New audio subsystem

Started by iWasAdam, July 16, 2018, 12:16:32


Derron

I see and hear the differences.


> And... you've got to do it constantly - you can't have a null buffer or you will get a crash!
Are you sure about this? Isn't a buffer some kind of "memory block" which the audio "device" (OpenAL et al.) plays over and over? When we played with ogg streaming in BlitzMax (old blitzmax.com days) it was all about filling the buffer ahead of time: e.g. you have a 2 second buffer, and once playback passes the 1 second mark you stream the next second of data into the 0...1s portion of the buffer. If you didn't do that, you ended up with a looping sound made up of different pieces (the new one suddenly stops and the old one plays until the new one starts).
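
For reference, this is roughly the usual OpenAL streaming pattern - keep a few buffers queued ahead of the play cursor and requeue the ones the device has finished. A minimal C sketch, assuming a hypothetical fill_from_decoder() standing in for whatever decoder you actually use:

    #include <AL/al.h>

    // Hypothetical decoder callback: writes up to 'size' bytes of PCM into 'dst'
    // and returns the number of bytes written (0 at end of stream).
    extern int fill_from_decoder(char *dst, int size);

    #define CHUNK_BYTES 16384

    // Call this regularly: requeue every buffer OpenAL has finished playing.
    void refill_stream(ALuint source)
    {
        ALint processed = 0;
        alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
        while (processed-- > 0) {
            ALuint buf;
            char pcm[CHUNK_BYTES];
            alSourceUnqueueBuffers(source, 1, &buf);
            int got = fill_from_decoder(pcm, CHUNK_BYTES);
            if (got <= 0) break;                   // end of stream: nothing to requeue
            alBufferData(buf, AL_FORMAT_STEREO16, pcm, got, 44100);
            alSourceQueueBuffers(source, 1, &buf); // back into the play queue
        }
    }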

So, as I thought: your audio system is more about bringing the "audio data" to the output - the output itself is provided by e.g. OpenAL.


Do not forget that everything has to be done in an external thread, as otherwise the buffer won't get refilled in time when you do some CPU-heavy stuff - e.g. some image loading. For that ogg-streaming thing this meant the provider of new data was outsourced into a C file, as "vanilla BlitzMax" builds non-threaded by default (NG builds threaded by default).
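
In plain C with pthreads such a feeder thread could look roughly like this (continuing the sketch above; the 10 ms wake-up interval is just a guess to tune against your buffer size):

    #include <AL/al.h>
    #include <pthread.h>
    #include <unistd.h>

    extern void refill_stream(ALuint source);  // from the sketch above

    static volatile int streaming = 1;

    // Feeder thread: keeps the queue topped up even while the main
    // thread is busy loading images or doing other heavy work.
    static void *feeder(void *arg)
    {
        ALuint source = *(ALuint *)arg;
        while (streaming) {
            refill_stream(source);
            usleep(10 * 1000);                 // wake ~100 times per second
        }
        return NULL;
    }

    // Usage: pthread_t t; pthread_create(&t, NULL, feeder, &source);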


bye
Ron

iWasAdam

> I see and hear the differences.

You know, I shouted and shouted that this was happening on the Monkey2 forums. I even posted apps, examples, suggestions - you name it, I did it. And Mark did NOTHING!

I even wanted to see if I could employ him to make the changes and he didn't even reply... Go figure  8)

Qube

Crikey! The macOS version sounds like it's running at half the bit rate with the volume cut off. Is it really that bad?

> You know, I shouted and shouted that this was happening on the Monkey2 forums. I even posted apps, examples, suggestions - you name it, I did it. And Mark did NOTHING!
I could dive into a speech about this, but I'll simply say this... just ditch Monkey2 and find something else. If the developer can't be arsed, then why should you put the effort into the language?

iWasAdam

I went back to SoundCloud and looked at the output from both examples. I didn't alter any volume etc. and used the same capture method on both, but they do 'look' very different - which, as you say, feels like half the bit rate.

Yep, and you're right about giving up on Monkey2. I now use MX2, which is my Monkey2 fork, with my own custom editor, etc. The only thing I know that's coming is Apple ditching OpenGL - that would mean shifting completely from MX2 to another language.

One really good thing that can be said is that I have been forced into really learning and getting a grip on some interesting stuff. I thought I knew audio, but now I have complete control of it in a way I didn't think I could...

iWasAdam

Now starting to fit things together and work on the UI:
[screenshot]
so...

- Above shows an input (this is actually the keyboard, which maps to a music keyboard spanning 10 octaves).
- Then there is the 'SynOsc'. This is a tweaked 4-voice sound source, with each voice being a sine wave an octave higher than the last - sort of like organ stops. Each voice has a mix and an amount, plus an 'over smooth' ratio. (There's a rough sketch of the idea below.)
- Next up is a sample. This is actually a wavetable, but it's just being played as a straight sample.
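
In rough C terms the core of a voice like that boils down to an additive sum of octave-spaced sines. A simplified sketch only - the smoothing and proper mixing are left out, and the names are mine, not the real code:

    #include <math.h>

    #define TWO_PI 6.2831853f

    // Four sine voices, each an octave above the last, each with its own level.
    // 'phase' runs 0..1 at the base pitch; 'levels' are the per-voice mix amounts.
    float synosc_sample(float phase, const float levels[4])
    {
        float out  = 0.0f;
        float mult = 1.0f;                 // 1x, 2x, 4x, 8x the base frequency
        for (int v = 0; v < 4; v++) {
            out  += levels[v] * sinf(TWO_PI * phase * mult);
            mult *= 2.0f;
        }
        return out * 0.25f;                // keep the summed voices in range
    }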

And how does it sound?
https://soundcloud.com/mavryck-james/synosc

Here you can really start to hear the possibilities. The first part is just some random notes; the second part (listen closely) has some modulation of the amount going to the first oscillator; and the last part is just some low notes - you can also hear clicking at very low frequencies with this generator.

And here is the SynOsc on its own without any other sound source:
https://soundcloud.com/mavryck-james/just-synosc

iWasAdam

OK, now I've got something to show...
[screenshot]
The top line shows 4 options: Sources, FX1, FX2 and Shapers (Shapers currently selected).
To the right you can see the available 'blocks'.

The bottom section shows the current audio block chain (in this case keyboard -> SynOsc -> To Stereo).

To create a chain, just drag/drop from the top to the bottom, or drag the blocks around - everything automatically sizes for you. Icons disappear and become attached to the mouse, and places light up when you can drop them.

I thought this was an obvious and simple way to create a new chain.
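
Under the hood a chain like that is conceptually just blocks processed in order. A C-flavoured sketch of the idea only (the real thing is mx2, and these names are made up):

    // Each block transforms an audio buffer in place; the chain runs them in order.
    typedef struct Block Block;
    struct Block {
        void (*process)(Block *self, float *buf, int frames);
        Block *next;
    };

    // keyboard -> SynOsc -> To Stereo is just a walk down the list.
    void run_chain(Block *head, float *buf, int frames)
    {
        for (Block *b = head; b != NULL; b = b->next)
            b->process(b, buf, frames);
    }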

Thoughts?

Derron

This might be useful for the "visual coder", or in this case the "visual music creator". Most of us surely just want to add effects to existing wave data (some ogg SFX files, some background music, or "noise" like an ambient wind sound blowing from the left speaker to the right). For that (me included) the visual aspect of a "music/sound composer" is not as important as the framework/API behind it.


bye
Ron

Naughty Alien

..I'm sorry guys, just a quick side question as it's sound-processing related... what software do you use for sound editing/SFX (I guess freeware)??

iWasAdam

@ Naughty Alien
I use my own software for creating and adjusting audio, cutting bits, etc.:
https://adamstrange.itch.io/qasardio
I don't use any software for layering, as I have a large library to start with.

@ Derron
Yep, I can see what you mean, but you're thinking about audio as a simple WAV file you just load and play. Think of audio as audio synthesis and you're getting closer. This is complete realtime block synthesis, where you define the actual synthesis method. It's a bit like: instead of playing a sound, you create it - you can sculpt it, mangle it, whatever.
It's not a synthesiser, it's a customised rack - you pick what you want and how you want it. Hence the visual side is a much better way to deal with it.

Here's that pesky wind example you wanted:
[screenshot]
You can now see the faders being added in with their parameters. So we start off with white noise; that goes into a filter, where we can sculpt the incoming noise to sound more like wind; and finally we add some delay to thicken things up and give it a bit more...? Something?
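
That whole noise -> filter -> delay chain is only a few lines of DSP. A rough C sketch of the idea, with a one-pole low-pass standing in for the filter block (the constants are arbitrary, not the real block's values):

    #include <stdlib.h>

    #define DELAY_LEN 22050                       // 0.5 s at 44.1 kHz

    // White noise -> one-pole low-pass (the wind "sculpting") -> simple delay mix.
    // 'cutoff' is 0..1 (lower = duller, windier); 'feedback' thickens the tail.
    float wind_sample(float cutoff, float feedback)
    {
        static float lp = 0.0f;
        static float dline[DELAY_LEN];
        static int   dpos = 0;

        float noise = (float)rand() / (float)RAND_MAX * 2.0f - 1.0f;
        lp += cutoff * (noise - lp);              // low-pass the raw noise
        float wet = dline[dpos];
        dline[dpos] = lp + wet * feedback;        // write filtered signal + feedback
        dpos = (dpos + 1) % DELAY_LEN;
        return lp * 0.7f + wet * 0.3f;            // dry/wet mix
    }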

That would still not be too interesting, so you would need to add some modulation. The modulators will hook directly to the faders and automate things for you (I haven't started those yet).

iWasAdam

Lots of tidying up and development of the sound sources and FX.

Now starting to turn my head to the control parameters:
[screenshot]
You can see on the top row we have the available controls - these are the same for every voice/synth.
Coming down, LFO1 has been selected and is showing its available parameters.
These are arranged in 3 sections:
1. Input - is there any control coming into it?
2. Control - what controls are available? These are mainly faders, but there are a few buttons.
3. Output - here are the available outputs, which you can drag/drop to the inputs of the synth.

I think Derron can start to see why (in this case) having it all visual makes sense?

The output can be mapped to ANY synth input, which will update visually - so you can see the LFO being applied to the synth parameters!
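
Conceptually, wiring an LFO output to a synth input just means the LFO writes into that parameter every control tick. A simplified C sketch (made-up names, not the real routing code):

    #include <math.h>

    #define TWO_PI 6.2831853f

    typedef struct {
        float rate;      // Hz
        float phase;     // 0..1
        float base;      // the fader's resting value
        float depth;     // modulation amount
        float *target;   // whichever synth parameter the wire lands on
    } LFO;

    // Tick once per control period; writes straight into the wired parameter.
    void lfo_tick(LFO *l, float dt)
    {
        l->phase += l->rate * dt;
        if (l->phase >= 1.0f) l->phase -= 1.0f;
        if (l->target != NULL)
            *l->target = l->base + l->depth * sinf(TWO_PI * l->phase);
    }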

So, to recap:
The synth/voice is user definable. It could be simple sample playback, or a mono synth, or a drum synth, or a wavetable, or any mix of them. (Turn up the speakers, create a bass synth and blow your walls down!)

Let's say you want a piano sound that decays in volume? You would need to use an envelope in the control section and link it to (well, wherever) anywhere you want.

Then just call it with Synth.NoteOn( pitch )
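
In C terms that piano case boils down to something like this - a sketch of the concept only, the real control section is far more general:

    // NoteOn sets the pitch and resets the envelope; the envelope then just
    // decays the voice's volume a little every sample, piano-style.
    typedef struct {
        float pitch;
        float env;       // current envelope level, 0..1
        float decay;     // e.g. 0.9999f per sample at 44.1 kHz
    } Voice;

    void note_on(Voice *v, float pitch)
    {
        v->pitch = pitch;
        v->env   = 1.0f;             // full volume on key press
    }

    float voice_amp(Voice *v)
    {
        v->env *= v->decay;          // exponential volume decay
        return v->env;
    }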




iWasAdam

The thing missing from the previous example was how you link everything together...

;D :o

iWasAdam

OK, still banging away at stuff.
I've now got the correct 'socket' appearing and the ability to link them together with wires:
[screenshot]
The base LFO is operational, but only the UI wires are connected - I need to add the internal routings now. The core is there, but not the code itself.

As shown above, each new wire added has a different colour, and each socket only supports one wire - hence three sockets per output.
All the logic to prevent you from wiring an input to an input, etc. is also present and correct.
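
That validation is the simple part - roughly this, in C-flavoured form (the names are mine):

    typedef enum { SOCK_IN, SOCK_OUT } SockKind;

    typedef struct {
        SockKind kind;
        int      wired;                      // each socket only ever holds one wire
    } Socket;

    // Returns 1 and marks both ends if the drop is legal, 0 otherwise.
    int connect_sockets(Socket *a, Socket *b)
    {
        if (a->kind == b->kind) return 0;    // no input->input or output->output
        if (a->wired || b->wired) return 0;  // one wire per socket
        a->wired = b->wired = 1;
        return 1;
    }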

Also, as a minor touch, I've given the wires some 'wiggle' when you move them. Subtle, but it looks nice :)

I've also added a global output with note, volume and pan, for nice automatic stuff like sweeps.

iWasAdam

OK, so now I've got it all working. I thought I would share a shot and some thoughts:
[screenshot]
All the wiring is now sorted, with automatic deletion. You can see that wires going to other controls are drawn thinner and go to their owner control - so you always know what goes where.

So, the pic above shows sound (noise) being created internally (no samples or input sounds from disk of any kind), plus two delays. The two delays are wired the same, so a wire from one will automatically connect a wire from the other - but they run in series! So using two gives a different sound than using one.

I'm not sure if the labels make sense, but if you know synthesisers and sound synthesis then they should.
This leads to the next thought. This is a complex (but easy to use) synthesis back end. You can play, sculpt, mangle and adjust sound to your heart's content.

The above was created in about a minute, with a further couple of minutes playing around to get the sort of sound I wanted. What is the sound? It's the creepiest helmet-based breathing I've heard - just using different notes (one in and one out). The faster you go, the quicker the breathing sounds; slower with a longer attack and the breathing gets heavier. The LFO gives a constant movement to the sound. It's all very creepy and in your face.

It would be great for a claustrophobic space based game.

So what's next?
- Lots of tweaking
- Functions
- Thinking about how the file systems should work
- Multiple voices

Thoughts, anyone?




iWasAdam

Functions are now complete!  :o

They operate on each of the 9 controls (so there are 9 of them per voice)

In operation they remap the output of the current control to a user-drawn version. You can also see there is a playhead which moves, so you know what is going on.

So... every control has up to 9 outputs, mapped to any available input. These are grouped in threes:
- normal
- inverse
- the function being remapped
You can use all of them at the same time!
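
The remapping itself is basically a lookup table sampled by the control's output. A tiny C sketch of the concept (assuming a 0..1 control range; the table size is arbitrary):

    #define FUNC_STEPS 64

    // The user-drawn function is just a table; the control's output indexes it.
    float func_remap(const float table[FUNC_STEPS], float x)
    {
        int i = (int)(x * (FUNC_STEPS - 1));
        if (i < 0) i = 0;
        if (i > FUNC_STEPS - 1) i = FUNC_STEPS - 1;
        return table[i];
    }

    // The three grouped outputs then fall out of one control value x:
    //   normal   = x
    //   inverse  = 1.0f - x
    //   function = func_remap(table, x)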

Again, I think Derron can possibly see the advantage of using a visual version rather than attempting to create a voice in code?  ;D

Derron

As written, I am not familiar with such stuff - and yes, for what you want to achieve a visual editor is surely a help.

bye
Ron