ADC vs SOUL (the language) vs soul and groove. Warning: long rant

Hi there all,
I loved the ADC conference. I had never been before, and it was super inspiring and really a life-changing experience for me.

I was most happy to see that Vadim from NI has found a version of the maths that will allow a filter to be modulated at audio rates.
He only used a sine wave for the LFO, though. I wonder what happens if, say, a pulse wave at 80% pulse width is used … that would be the real test of whether it sounds as good as an old analog synth.
I watched the discussion around Soul with interest.
After visiting the ADC I kept thinking about all the problems that face a “serious” synthesizer performer like me. I started out as an Electrical Engineering student, but the pull of the Fender Rhodes was too great around 1990 and I ended up a musician. When I left school in 1986 I really wanted to build synths. My Oberheim and Yamaha synths from the 1980s still beat the new stuff, but they are often broken, heavy and sluggish. The i7 processor made using a computer as a musical instrument possible, and after a couple of years of bugging real programmers to build stuff for me I decided to take a chance and enroll in a computer science degree. I now have a few subjects left. I have managed to build a few VST plugins using JUCE, but a serious understanding of DSP is well beyond me at this point.
I have tried Max, PYO, Pure Data, Faust and JUCE to try and build things, but getting from those platforms into JUCE is a world of trouble. I did manage to make some apps from scratch in pure C++. If SOUL really had as many audio classes as Max does it might be really useful, and I would love to use it. I want great-sounding pulse and saw waves that don’t alias and can modulate great-sounding filters at audio rates in all sorts of routing combinations. I haven’t been able to do that, or find a platform that will do it digitally. I will keep studying, but it may well be that in this lifetime a serious understanding of the maths and DSP is beyond me …

My Oberheim Matrix-1000 and Xpander win that contest hands down at the moment, although the Matrix-1000 is pretty sluggish even after updating the firmware, and the computer mostly destroys it for playability. JUNE2 is the best-sounding VST synth for me: it has a pretty good emulation of an Oberheim 4-pole low pass and you can modulate it at audio rates. There is a lot of aliasing or something, but it’s tolerable at this point. Still, carrying old bits of gear around internationally that are heavy, and saying a prayer every time you power them on, isn’t feasible.

Before leaving New York for the ADC I was once again left with the problem of what to bring. Airlines aren’t friendly to luggage these days, and NO-ONE makes a decent keyboard controller with a synth action like a Fatar keybed that is MODULAR. If only you could put the octaves together and bring as many octaves as you need. The Novation SL MkII, which is the most playable lightweight keyboard, is full of buttons and knobs which make it much bigger and heavier than it needs to be. What I kept thinking at the audio conference is how little of the gear is really aimed at someone like me who can actually play and wants to do it live.

I might be crazy, but I am pretty sure I can feel the jitter when I play a synth inside a DAW. My little Yamaha QY70 feels 1000 times better to play, and it’s 20 years old. For a guy like me who is really into playing “in time”, latency is one thing but jitter is pure evil. When I play those soft synths, even with just one or two tracks in the DAW, it feels like I am standing on a boat instead of solid ground. It’s way, way better than 2005 in terms of latency, but my little dedicated bits of gear like the Nord Modular G2X kill the computer in terms of how it actually feels to play the thing.

I asked to perform at the audio conference open mic, and I was really surprised at how poor the sound system was, that there was no keyboard present, and that almost none of the talks involved musicians or music making. At the trivia quiz there was a very simple ear-training question, and no one at my table/team but me had a clue what the answer was. During Marina Bosi’s lovely talk about compression algorithms, after she played one file and then the compressed version, I was apparently the only person in the room who could tell that the low mids had been changed by the process. Because the learning curve is so steep to actually use this stuff, most of the people involved are more into maths and physics than music.
I see this as a real problem for a guy like me, and why the “industry” seems to be lurching towards eurorack-type stuff, which is designed for people who work in a bank, go to Burning Man, and smoke bongs and twiddle knobs once they get home from work. I tried the Seaboard, which in theory is a cool idea, but from a “virtuoso” keyboard player’s perspective it is a joke. There is no way you could play the Brahms violin concerto or Allan Holdsworth’s “City Nights” on that thing; it just won’t go that fast. Again, it’s super cool if you are a teenage kid or an amateur muso who wants to make some noises in his/her bedroom while blasted. If an instrument can’t be taken to the point that a serious muso can play it really well and inspire the next generation with it, I don’t think it will live very long. That’s why violins, pianos and guitars are still around: great musicians inspired the youngsters to take them up. It was Herbie Hancock playing a Fender Rhodes that made me drop out of engineering.
I got notice that I could perform at the open mic about four hours before. My audio interface was back at the hotel, and the conference had no keyboard. Perhaps foolishly, I decided to head to the local shop and pick up an Alesis 4-octave controller for 70 pounds.
I tested it out in the basement of CodeNode and it worked fine, but of course on stage some horrible race condition or CPU spike occurred, rendering the whole thing pretty much useless. Perhaps forgetting to turn off the Wi-Fi caused a conflict with the Apple headphone audio. Perhaps the drivers the Alesis invoked didn’t work with the combination I had. Who knows, but a brand-new USB keyboard, a pretty new version of Cubase and a couple of soft synths failed miserably. Even with my “rig”, like any computer program, five times out of 100 it misbehaves and is useless.
It just seems absurd to me that in 2018 you have to lug around keyboards to have something reliable.

I tried the Bela board, and another musician friend bought one too. We planned to build lots of cool effects on it. He, like me, is a “serious muso”: a jazz guitarist who also knows his way around a pointer and could probably declare a 3D array on the heap. Unfortunately we both had the same problem with that little Bela box: it made a high-pitched noise which sounded like USB power leaking into the minijack. It only runs at 44.1 kHz, and it seems great if you want to build a Star Wars lightsabre emulator, but not so good for a really great-sounding DSP box. I can REALLY HEAR THE DIFFERENCE between a digital chorus running at 44.1 and 48 kHz. It’s a HUGE difference to my ears. Why doesn’t someone build a box like that which is programmable but really designed for an audio head like me?? Why doesn’t it have MIDI in and 1/4-inch shielded connections??
I tried running a PSOLA algorithm on the Bela. A thread conflict meant the CPU couldn’t render the samples fast enough and it crapped out. The Bela guys sent me a vague email about how to incorporate threading on the device, but I couldn’t figure it out.
I studied CUDA, POSIX threads etc. at uni, but the Bela system had no decent documentation or examples that helped me to solve it.
That thing is sitting in the drawer at this point.
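For anyone hitting the same wall: the pattern that kind of email is usually pointing at is the standard real-time handoff, where the audio callback posts work to a lower-priority thread and never blocks or allocates itself. Here is a generic C++ sketch of that idea; all the names are mine, and it is not Bela’s actual API (Bela wraps something like this in its AuxiliaryTask mechanism, if I recall correctly).

```cpp
#include <algorithm>
#include <atomic>
#include <chrono>
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical names: a lock-free handoff between a real-time audio
// callback and a background worker, so the callback never blocks.
struct AnalysisJob {
    std::atomic<bool> pending{false};  // set by audio thread, cleared by worker
    std::vector<float> snapshot;       // written only while pending == false
};

// Called from the audio callback: copy the block out and flag the worker.
// No locks, no allocation (snapshot is pre-sized), so it is realtime-safe.
bool postBlockForAnalysis(AnalysisJob& job, const float* block, size_t n) {
    if (job.pending.load(std::memory_order_acquire))
        return false;                  // worker still busy: drop this block
    std::copy(block, block + n, job.snapshot.begin());
    job.pending.store(true, std::memory_order_release);
    return true;
}

// Runs on a normal-priority thread: the heavy, non-realtime-safe work
// (e.g. PSOLA analysis) lives here, away from the audio callback.
void workerLoop(AnalysisJob& job, std::atomic<bool>& running,
                std::atomic<int>& processed) {
    while (running.load()) {
        if (job.pending.load(std::memory_order_acquire)) {
            // ... expensive analysis on job.snapshot would go here ...
            processed.fetch_add(1);
            job.pending.store(false, std::memory_order_release);
        } else {
            std::this_thread::sleep_for(std::chrono::microseconds(100));
        }
    }
}
```

The key design point is that the audio side only ever does an atomic load, a memcpy into pre-allocated storage, and an atomic store; if the worker is behind, the block is simply dropped rather than waited for.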

What I want is a really great audio interface that is programmable with some sort of language (it could be SOUL; I don’t care, as long as I can learn it). It would be great if it could come with a DAW and run a VST instrument. It would be super fun if it had some analog filters, maybe a DCO-style synth on a chip after the DAC, and another ADC so you had the option to send audio back to the DAW for effects processing.
What would be even cooler would be a super portable one octave high quality keyboard that had that audio interface in it and you could bring other octaves in a modular fashion.
I heard @jules mention that MIDI might drop out briefly if the DSP couldn’t keep up. How would that affect a guy like me, playing a zillion notes a second into a synth that is maxing out the CPU to render a nice waveform? Why give priority to the audio? That’s my other beef with the makers of DAWs: they don’t provide a MIDI insert. Cubase (whose makers wrote the damn specs) has its own MIDI inserts for its own MIDI plugins, but people like me who build different stuff are forced to come up with an insane routing to get it to work.

Hopefully all of you super smart people who build these things will keep in mind that occasionally people like me, who actually want to “play” the stuff, might use it.

Sorry to moan but there you have it.

Thanks again @jules and everyone else for such a fantastic few days …

Perhaps someone here wants to start a business with me to try and build something really useful for a guy/gal like me.

Londoners: I am performing here tomorrow (Sunday 25th, 6:30-9:30pm, no cover, with a great jazz/funk band)
at The Gatehouse in Highgate …



This is a very interesting post; it’s great to get a professional musician’s perspective on today’s audio software and hardware, especially from a technical point of view.

If I understand correctly, your main issues with the technologies you mention are latency, sound quality and performance.

The issue with latency

When working with a DAW, it’s pretty much impossible to avoid a certain base latency - be it the A/D converters or the overhead of processing buffers of data (as explained by Jules in the beginning of his SOUL talk).
These issues can only be mitigated with better hardware combinations and faster processors.
However, as soon as you start adding effects to the audio, some algorithms must introduce extra latency to perform their DSP magic: a lookahead compressor or a linear-phase EQ needs access to “future” samples, so it has to delay the signal. There’s just no way around it.
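To make the lookahead point concrete, here is a toy lookahead delay in C++ (illustrative names only, not any real plugin API): to “see” N samples into the future, the processor must hold N samples back, and that N is exactly the latency it would report to the host.

```cpp
#include <cstddef>
#include <deque>

// A toy lookahead delay line. To look `lookaheadSamples` into the future,
// the processor outputs the sample from that many samples ago, which is
// exactly the latency a plugin reports to the DAW.
class LookaheadDelay {
public:
    explicit LookaheadDelay(size_t lookaheadSamples)
        : fifo(lookaheadSamples, 0.0f), lookahead(lookaheadSamples) {}

    // Push one input sample; get back the sample from `lookahead` samples ago.
    float process(float in) {
        fifo.push_back(in);
        float out = fifo.front();
        fifo.pop_front();
        return out;
    }

    // The latency, in samples, that the host must compensate for.
    size_t latencySamples() const { return lookahead; }

private:
    std::deque<float> fifo;
    size_t lookahead;
};
```

A 5 ms lookahead at 48 kHz means 240 samples of delay, so the host has to shift everything else by the same amount to keep tracks aligned.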

Dedicated hardware always beats software

It’s not surprising at all that you find analogue instruments (and synthesizers) much more usable than soft synths running inside a DAW: these machines are built for a single purpose, producing audio, so naturally they excel at it far more than a PC running an OS like Windows or macOS, which was never intended to be realtime-safe.
In my opinion, dedicated platforms like the Bela board are the best way to get the realtime-safety of dedicated operating systems and DSP chips without having to hardwire any DSP algorithms (or write them in assembly language).
I don’t know why you haven’t been able to implement a PSOLA algorithm on the Bela; from what I’ve heard, it should have decent enough processing power. Perhaps you’re simply doing some non-realtime-safe operations in the audio callback (memory allocation, locks, file I/O)? In any case, I’ve been wanting to get my hands on one for quite some time, so I’d be interested in buying yours :slight_smile: Funnily enough, I’m planning to do some PSOLA-based processing with it as well. Please DM me if you want to sell it.

What do you mean by the “jitter” you can feel when playing synths in a DAW? I totally see why even the slightest latency can throw you off (I’m pretty sensitive to it myself), but I haven’t experienced any jitter, as you call it.

The issue with aliasing in synth voices is definitely solvable at the DSP level: software like Xfer Records Serum can synthesize custom waveforms with virtually no aliasing. I’d recommend trying it out if you’re not set on emulations of analogue synths; if you are after that precise analogue sound, you should probably stick with the hardware, as software emulations will probably never beat ’em.
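For a taste of how anti-aliased oscillators work, here is the classic PolyBLEP sawtooth in C++. This is a minimal sketch of one well-known technique (smoothing the saw’s discontinuity over roughly two samples), not Serum’s actual approach, which as far as I know is wavetable-based.

```cpp
#include <cmath>

// A minimal PolyBLEP sawtooth oscillator: a naive saw plus a polynomial
// band-limited step correction at the phase-wrap discontinuity, which is
// where most of the aliasing energy comes from.
struct PolyBlepSaw {
    double phase = 0.0;  // normalized phase, 0..1
    double inc   = 0.0;  // phase increment per sample = frequency / sampleRate

    void setFrequency(double hz, double sampleRate) { inc = hz / sampleRate; }

    // Standard two-sample PolyBLEP residual around the discontinuity.
    static double polyBlep(double t, double dt) {
        if (t < dt)       { t /= dt;            return t + t - t * t - 1.0; }
        if (t > 1.0 - dt) { t = (t - 1.0) / dt; return t * t + t + t + 1.0; }
        return 0.0;
    }

    float next() {
        double out = 2.0 * phase - 1.0;  // naive saw in -1..1
        out -= polyBlep(phase, inc);     // subtract the aliasing residual
        phase += inc;
        if (phase >= 1.0) phase -= 1.0;
        return static_cast<float>(out);
    }
};
```

The correction only touches the two samples nearest the reset, so the oscillator stays cheap while the harsh aliasing of the raw saw is heavily attenuated.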

Thanks Pixel,
At least someone responded, so I didn’t feel like I was pissing into the Thames.
A friend at the conference showed me that synth, which I must say looks very useful and worth exploring.

As I understand it, once you have an audio interface, a USB MIDI controller, a DAW and some plugins going, there is jitter introduced all over the place. As a computer science student, my experience tells me it is pretty much impossible to predict, and might be different on a different day. 10 ms of latency is roughly equivalent to your amp being 10 ft away. That’s something a jazz musician can cope with: no instrument speaks instantly, and even a piano is slower to respond on the lower notes. Imagine, though, varying amounts of jitter, so that effectively your amp is moving to positions from 3 to 15 ft away from you at random. If you are obsessed with time and “placement” like me and most jazz and funk musicians, I think that variability is really noticeable. It’s also probably going to radically change the sound of, say, a delay-based modulated chorus, which relies on constantly shifting signals by small amounts. Perhaps in that situation the results may even be pleasing.

It’s pretty obvious to me that the audio and MIDI processing need to be taken away from all the other interrupts and scheduling to work properly. I really have no need for a GUI updating the screen when I perform; a simple two-digit LED display with a patch number would be more than enough for me. Instead, with those DAW-based instruments, the audio thread is waiting for the CPU to process a slider being moved across the screen and to render video. Absurd!
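To put numbers on that amp-distance analogy: sound travels roughly 343 m/s at room temperature, so latency in milliseconds maps directly to an equivalent distance. A quick sketch (the function name is just illustrative):

```cpp
// Convert a latency in milliseconds to the equivalent distance from your
// amp, using the speed of sound in air (~343 m/s at room temperature).
double latencyMsToFeet(double latencyMs) {
    const double speedOfSoundMps = 343.0;               // metres per second
    const double metres = speedOfSoundMps * latencyMs / 1000.0;
    return metres * 3.28084;                            // metres -> feet
}
```

10 ms works out to about 11 ft, so the “amp 10 ft away” rule of thumb is close, and 3 to 15 ft of random movement corresponds to roughly 3 to 13 ms of varying delay.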
Musicians like me really need a dedicated sound card which can be programmed. It could come with some editing software, like a Nord Modular G2X, but the idea of recording-studio visuals being rendered, and an OS juggling loads of interrupts, makes no sense. The Bela might be useful if it had a great DAC that ran at high sample rates and was properly housed in a shielded box with shielded 1/4-inch outputs etc. Why should a musician like me be forced to solve an electrical engineering problem like that to make music with it?
Unfortunately for me, most people who use this stuff want a photo of some old synth on the screen of their Mac and a controller with a thousand knobs…

So by “jitter” you mean a varying latency while making music. That indeed sounds awful, but it’s not something I’ve ever experienced when recording, although I am not a classically trained musician, so perhaps I don’t have the same feel for such things as you do.

I have to admit that I can’t really follow you here: DAWs request the highest priority for the audio thread from the operating system in order to fulfil the real-time requirement, and it is an absolute MUST for any VST instrument or plug-in to NEVER make the audio thread wait on any other thread, be it the GUI thread or some data stream from disk.
In my experience using Logic Pro X with various plug-ins on my Macbook, the audio thread never stalls due to the plug-in GUIs.
If it crackles, it’s due to some other processing going on on the machine (e.g. Google Chrome running in the background), but those processes can simply be quit when working with Logic.
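In code, the usual way a GUI slider talks to the audio thread looks roughly like this: the parameter is shared as an atomic, the GUI thread stores and the audio thread loads, and neither ever takes a lock. This is a generic sketch of the pattern, not any particular DAW’s or framework’s code.

```cpp
#include <atomic>

// A shared parameter between the GUI and audio threads. The audio thread
// never blocks: it just reads whatever value the GUI wrote most recently.
struct GainParam {
    std::atomic<float> value{1.0f};  // written by the GUI thread

    // GUI thread: called when the slider moves. A plain atomic store.
    void setFromGui(float v) { value.store(v, std::memory_order_relaxed); }

    // Audio thread: called once per block. A plain atomic load, then DSP.
    void applyGain(float* block, int n) {
        const float g = value.load(std::memory_order_relaxed);
        for (int i = 0; i < n; ++i)
            block[i] *= g;
    }
};
```

So a slider being dragged never makes the audio callback wait; at worst the audio thread uses a value that is one block stale, which is inaudible.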

What’s the setup you experience issues with?

Hi CrushedPixel,
Every sound card, DAW and MIDI setup has a different flavour of timing problems. The most “solid” solution for me is MIDI into a FireWire interface. MIDI over USB can be faster with newer drivers, but seems much more likely to hang up or feel jittery. Cubase sounds better to me than Reaper, but Reaper seems a bit more lightweight. MainStage feels pretty good, but I much prefer the sound of Cubase to that too. You would think it’s all in my head, but I really do hear the difference, I think.

The way the DAW deals with scheduling tracks and inserts etc. is obviously a factor, and I would think that USB MIDI should have a priority as high as audio. Imagine a dongle in Cubase, two MIDI keyboards, an audio card and a separate audio thread all racing to the CPU. Imagine playing a chord on one keyboard and a lot of fast notes on another. The DAW is trying to render audio (and has other threads waiting to do stuff, like deal with incoming MIDI or the GUI)…
Sure, the audio is the highest “priority”, but it’s easy to see how a succession of MIDI notes may not have their audio rendered at the same constant latency. I notice it. Sean
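One concrete source of exactly this effect: if a host only hands MIDI to a plugin at block boundaries (rather than timestamping events with a sample offset inside the block), an event’s rendered onset is quantized to the block rate. A rough sketch of the worst-case figure, assuming once-per-block MIDI handling:

```cpp
// Worst-case timing jitter when incoming MIDI is only acted on once per
// audio block: an event can arrive anywhere inside the block, so its
// rendered onset can shift by up to one full block length. Sample-accurate
// hosts avoid this by giving each event an offset within the block.
double worstCaseMidiJitterMs(int blockSizeSamples, double sampleRate) {
    return 1000.0 * blockSizeSamples / sampleRate;
}
```

At a 256-sample buffer and 44.1 kHz that is about 5.8 ms of possible note-to-note variation, which is well within the range a time-obsessed player could feel.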