SyntaxBomb - Indie Coders

General Category => Worklogs => Topic started by: iWasAdam on July 16, 2018, 12:16:32

Title: New audio subsystem
Post by: iWasAdam on July 16, 2018, 12:16:32
OK, there are no pictures or anything to show.  8)
I'm working on a new audio subsystem and thought I'd share a few thoughts, and open it up for anyone to comment with suggestions etc.

If we take a general look at audio, it is usually the following:
1. load a sound
2. play the sound
3. if we are lucky, the sound may vary in volume, pitch and pan
4. caveat: pan can (usually) only apply to mono sounds
5. sounds can be looped - but only the whole sound - no loop points, etc.

So here are my thoughts:
1. have a system where the audio is exactly the same across platforms (Monkey2 has serious issues with MacOS playback)
2. sounds (mono and stereo) can be properly panned
3. loops can be programmed
4. different playback options - reverse for example
5. FX busses such as filter, echo, delay, reverb
6. different sound generation systems: granular, wavetable, vector
7. envelope and other parameters (LFOs)
8. possible sound design from blocks: drag/drop to create new sound systems with user-controlled routings?

That's my list. Any thoughts?
Title: Re: New audio subsystem
Post by: col on July 16, 2018, 20:25:11
Hiya,

Just a thought...

When I've played with sound effects and music samples I've always thought that having some kind of callback mechanism for when a sample, or more specifically a section or timing of a beat, is played could be useful.

For example, instead of code triggering a sample, the reverse would happen: a long sample is played (say one with a bass beat), and when the beat is hit a callback is triggered, allowing 'that beat moment' to be known in code.
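
Something along these lines, maybe (an untested Monkey2 sketch - the voice/beat names are all made up):

#Import "<std>"

Using std..

' Hypothetical voice that fires a callback when playback crosses a beat marker.
Class BeatVoice
    Field position:Int              ' playback position in samples
    Field beats:=New Stack<Int>     ' sample offsets of the beats
    Field onBeat:Void( Int )        ' called with the index of the beat hit

    Method Update( samplesPlayed:Int )
        Local oldPos:=position
        position+=samplesPlayed
        For Local i:=0 Until beats.Length
            If beats[i]>=oldPos And beats[i]<position And onBeat<>Null
                onBeat( i )
            Endif
        Next
    End
End

Function Main()
    Local voice:=New BeatVoice
    voice.beats.Add( 0 )
    voice.beats.Add( 22050 )        ' a beat at 0.5s (44.1kHz)
    voice.onBeat=Lambda( index:Int )
        Print "beat "+index+" hit - 'that beat moment' is now known in code"
    End
    voice.Update( 44100 )           ' normally driven by the audio update
End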
Title: Re: New audio subsystem
Post by: Derron on July 16, 2018, 21:00:46
Callbacks could be added everywhere - but maybe this could also be done in the "TSound"-object-extensions then.

Talking about stuff like auto-volume-adjustment for fake-positional audio (think of a "GetVolume()" callback feeding a volume depending on the sound object's parent's position on the screen relative to the player's screen position).
Similar stuff could be written about "panning" - so panning position/settings could become dependent on certain callback results/adjustments.
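
A rough sketch of that idea in Monkey2 syntax (GetVolume/worldX etc. are invented names):

#Import "<std>"

Using std..

' A sound whose volume and pan are derived from its owner's position.
Class PositionalSound
    Field worldX:Double                  ' x position of the sound's owner
    Field getVolume:Double()             ' optional volume callback

    Method CurrentVolume:Double()
        If getVolume<>Null Return getVolume()
        Return 1.0
    End

    Method CurrentPan:Double( listenerX:Double, screenWidth:Double )
        ' -1 = hard left, +1 = hard right
        Return Clamp( (worldX-listenerX)/(screenWidth/2), -1.0, 1.0 )
    End
End

Function Main()
    Local snd:=New PositionalSound
    snd.worldX=900
    Print "pan="+snd.CurrentPan( 640, 1280 )   ' owner is right of the player
End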


Regarding sound routes: as soon as you have a "player" and some kind of "API" for the sound objects (requesting data buffers, informing about play, pause, stop ...) you only need to provide basic capabilities - which avoids workload on your shoulders.


bye
Ron
Title: Re: New audio subsystem
Post by: iWasAdam on July 17, 2018, 07:40:08
Mmm, currently I'm working on just getting the base audio working.
I've got a basic OpenAL version, but there are some timing issues, so I'm going to give SDL_Audio a try as well.

The essence of both is the same:
Stereo RingBuffer being filled with sound data
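
The core of it is something like this (a cut-down sketch - sizes and names are arbitrary):

#Import "<std>"

Using std..

' Interleaved stereo ring buffer: the mixer writes, the audio device drains.
Class StereoRingBuffer
    Field data:Double[]
    Field writePos:Int

    Method New( frames:Int )
        data=New Double[frames*2]   ' interleaved L,R pairs
    End

    Method Write( left:Double, right:Double )
        data[writePos]=left
        data[writePos+1]=right
        writePos=(writePos+2) Mod data.Length
    End
End

Function Main()
    Local ring:=New StereoRingBuffer( 1024 )
    For Local i:=0 Until 1024
        Local s:=Sin( i*2*Pi*440.0/44100.0 )   ' 440Hz test tone at 44.1kHz
        ring.Write( s, s )
    Next
    Print "filled, writePos="+ring.writePos
End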

I like the concept of the audio calling 'you' back, and also the auto-position stuff...
Title: Re: New audio subsystem
Post by: iWasAdam on July 17, 2018, 10:48:34
I was thinking of something along these lines:
(https://vjointeractive.files.wordpress.com/2018/07/screen-shot-2018-07-17-at-10-44-34.png)

where the red lines are left, light green are right.

In this example the generator would be a mono sound.
The left channel then goes into a ToStereo block, where the right channel is just a copy of the left.
And finally the output.
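
In code the ToStereo block is about as simple as it gets (sketch):

' Mono in, stereo out: the right channel is just a copy of the left.
Function ToStereo( mono:Double[], out:Double[] )
    For Local i:=0 Until mono.Length
        out[i*2]=mono[i]       ' left (red)
        out[i*2+1]=mono[i]     ' right (light green) - copied from the left
    Next
End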
Title: Re: New audio subsystem
Post by: iWasAdam on July 17, 2018, 15:23:18
I think I'm sort of getting my head around this one.

In essence you will select a source, and then add modifiers (they will appear left to right). These will be processed in order until they come to the output.

The order of the modifiers will 'sculpt' the output sound differently, so having 'grunge' before 'delay' would give a different effect.

Each source/modifier may also have outside inputs. You can feed these inputs from outside sources such as envelope, LFO and fader controls.
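
The processing loop itself can stay dumb - something like this (sketch), with all the sculpting coming from the order of the list:

#Import "<std>"

Using std..

Class Modifier Abstract
    Method Process( buffer:Double[] ) Abstract
End

Class Chain
    Field modifiers:=New Stack<Modifier>   ' appears left to right in the UI

    Method Render( buffer:Double[] )
        For Local m:=Eachin modifiers
            m.Process( buffer )   ' each stage sculpts the previous result
        Next
    End
End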


This sort of means that each voice would be a programmable playback device (a sort of Uber synth).


I have already extended MX2 to support a much higher range of output frequencies that map to 10 octaves: from really low (almost subsonic) to FAAAST.

First modifiers are:
BitCrush - crushes the sound so it can be more 8-bit sounding (see the sketch below the list)
Grunge - sort of messes up the sound a bit like a fuzzbox
Distort - add a nice distortion - great when used with grunge
StereoDelay - takes a stereo source and delays one channel, or takes a mono source and creates a new stereo channel with delay.
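
As a flavour of how small these passes can be, BitCrush boils down to roughly this (sketch):

#Import "<std>"

Using std..

' Quantise samples (-1..1) to fewer levels so they sound more 8-bit.
Function BitCrush( buffer:Double[], bits:Int )
    Local levels:=Pow( 2, bits )
    For Local i:=0 Until buffer.Length
        buffer[i]=Floor( buffer[i]*levels )/levels
    Next
End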


I'm not sure how many to allow in a 'chain' - probably 3 or 4 would be a good amount?


Title: Re: New audio subsystem
Post by: Derron on July 17, 2018, 20:08:06
Why should you need to "allow" an amount of elements in a chain?

Type TSoundProcessor
	Field input:TSoundProcessor    'previous element in the chain (Null for a generator)
	Field output:TSoundProcessor   'next element in the chain
End Type

...

Chain as much as you want. The final output element has an input socket of type TSoundProcessor too - and all "SoundElements" (data, waves, ...) have an "output" linking to a "TSoundProcessor". Then you create a stub TSoundProcessor which just acts as a connector between "SoundElement" and "SoundOutput".
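
The same idea in Monkey2 syntax - pull data through by walking the input links (a sketch, not the actual API):

Class SoundProcessor
    Field input:SoundProcessor   ' Null for a pure generator

    Method Process:Double( sample:Double ) Virtual
        If input<>Null Return input.Process( sample )
        Return sample
    End
End

' Example element: scales whatever its input produced.
Class Gain Extends SoundProcessor
    Field amount:Double=0.5

    Method Process:Double( sample:Double ) Override
        Return Super.Process( sample )*amount
    End
End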


bye
Ron
Title: Re: New audio subsystem
Post by: iWasAdam on July 18, 2018, 06:21:31
yep that's one way to do it :)

Title: Re: New audio subsystem
Post by: iWasAdam on July 19, 2018, 08:32:09
slowly moving stuff around and getting things operational.

A chain can now be of any length, but I'm sticking to max of 6 at the moment.

One major change with making things modular is the sound generation itself. This was going to be sort of:
Generator then the FX chain.

I suddenly thought that the sound sources are just FX as well, so now they could appear anywhere (or not at all) in the chain. It also means that multiple generators can be used. I'll need to test this... hmmmm

Yep. It works!!! Just got a stereo sample playing forward AND backward at the same time (this is not 2 separate voices, just a single voice with 2 generators in the chain)!
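
Conceptually that chain mix is just a sum - eg. forward plus reversed (simplified sketch, not the actual code):

' Two 'generator' stages in one voice: forward + reversed, summed.
Function MixForwardAndReverse( sample:Double[], out:Double[] )
    Local n:=sample.Length
    For Local i:=0 Until n
        out[i]=( sample[i]+sample[n-1-i] )*0.5   ' scaled to avoid clipping
    Next
End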
Title: Re: New audio subsystem
Post by: Derron on July 19, 2018, 10:02:41
Yes, I originally wrote that the "TSoundProcessor/TSoundEffect" could be used as input too. So you have something which somehow generates/manipulates/plays "wave data" - and this gets plugged into an "output". The output is then something which adds compatibility with existing playback providers (eg. the sound engine in Monkey2).

The basic idea is then:
- have something which can playback a buffer (-> output)
- have something which somehow fills a buffer (-> the sound processors / effects)

So it is up to the "processors" to emit events/call callbacks if they start all over, generated something, got hooked to something ... they might even add callbacks to the predecessor sound effects (to get informed if a "parent" has done something of interest).

The output then emits events/calls callbacks too (buffer got refilled, processor/effect as input got attached, ...).

As inheritance can lead to cyclic dependencies you either need to have an "interface" (API...) or use some kind of "TSoundOutputBase" which defines all the callbacks already. This way each TSoundProcessor/Effect could import the TSoundOutputBase class without needing to know about the actual implementation (as TSoundOutput needs to know about TSoundProcessor, this would otherwise lead to a cyclic dependency).

At the end of the day the SoundOutput (or its extending descendants) would be able to know/react to SoundEffect stuff - while SoundEffect stuff can react to SoundOutput things. It is not really needed, but maybe one can come up with some useful examples. Another way is the classical hierarchic information flow:
output informs attached sound effect ("input") ...
attached sound effect informs attached sound effect ("input socket")
... do this until no further input socket is filled (mostly the sound generator / file loader/streamer)

It depends on how you want to lay out object connections and knowledge about each other.
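
A sketch of that dependency-breaking layout in Monkey2 syntax (all names invented):

' Processors only ever see the base class...
Class SoundOutputBase
    Method OnBufferRefilled() Virtual
    End
End

Class SoundProcessor
    Field output:SoundOutputBase   ' no dependency on the real output class

    Method NotifyGenerated()
        If output<>Null
            output.OnBufferRefilled()
        Endif
    End
End

' ...while the real output may know the processor side in full.
Class SoundOutput Extends SoundOutputBase
    Field input:SoundProcessor

    Method OnBufferRefilled() Override
        ' react here - eg. queue the refreshed buffer for playback
    End
End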



BUT: what does all of this have to do with a "new audio subsystem"? You are talking about "TSound"-object generation. Means it _could_ just be built on top of existing code and extend the "sound container" while adding some clever "need to refill buffer" stuff. The "callbacks" we discussed a bit here are what needs to be added to existing sound systems so extended TSound-objects could handle stuff properly (eg. streaming data of a music file).


bye
Ron
Title: Re: New audio subsystem
Post by: iWasAdam on July 19, 2018, 10:46:53
It's NOT TSound, it's built from the ground up!

Base design is:
Voice is a single stereo playback unit. It contains all the code and logic to create sound (from either an input sample or a generated sin, etc), and all the FX code. A generator is just treated as an FX, so there are no generators, just FX.

Synth deals with the buffers, voice mixing (currently it is just one voice), the actual audio output, etc.

So in use it is:
(in new)
mySynth = new Synth
update = 120 times a second

load sound0 from disk, or create a waveform (sound0), whatever

(in update120)
mySynth.UpdateAudio()


(when you want the sound)
mySynth.Pitch( _pitch )
mySynth.NoteOn()

None of this will really do anything as you would need to have a basic voice architecture defined (how the voice is created from a chain) - I should add a basic default chain here... PlaySample > Pan


The key here is that it is actually operational and solves the main issue with Monkey2's borked audio system on MacOS.
Title: Re: New audio subsystem
Post by: iWasAdam on July 19, 2018, 11:12:54
That's the basic stuff.

The more interesting stuff will come with LFOs and envelopes...
Title: Re: New audio subsystem
Post by: Derron on July 19, 2018, 11:26:34
So it is not about the "audio playback" (which is what I, as a "normal" non-audio-enthusiast developer, understand as an "audio (sub)system") but about the "audio object" (waveform or "data") and however it can get created.

Think I misunderstood what you are doing - but, hmm, "it is NOT TSound" - so it is ... what are you rewriting? How "audio content" is created (streamed files, dynamically created tunes, ...)? How "audio content" is played (position-dependent volume, panning, loops which inform the "creator" too)? How "audio content" is output (OpenAL, Alsa, PulseAudio, LibJack, DX, ...)?


bye
Ron
Title: Re: New audio subsystem
Post by: iWasAdam on July 19, 2018, 11:54:08
OK.
Sound is loaded as a mono or stereo 16-bit waveform.

Say, drumclap.wav.

Usually the sound is packaged and sent to a channel. The channel then plays the sound through the connected device (usually OpenAL).
Again, this means the device creates an internal sound and plays it.

But you can do it other ways:
A ring buffer.
In this case you create an OpenAL sound that continuously loops. You then give it a buffer and feed the buffer with sound data that you are in control of.

If you need pitch control, you will need to figure out how to do it and feed it to the buffer.
If you need panning, then you will have to figure out how to do it and feed that to the buffer.

NOTE
BlitzMax has a very nice sound system that did exactly the above, with a ring buffer being fed correct data. It was stable and could be extended, but could get a bit buggy at times. But you could pan stereo samples, etc.

OpenAL won't let you pan stereo samples, only mono samples. And there is something borked in Monkey2 audio that causes mono samples to lose about the first 512 bytes of data in any sound being played.


So the new audio subsystem is an OpenAL ring buffer being fed custom data. So to get ANY sound output you need to write the correct data to the buffer and make sure the buffer is kept fed with data, otherwise you will get 'bad' audio: cracks, pops, dropouts etc.
If you need to alter the volume, pitch, pan, etc. you need to actually write the code yourself to feed the buffer.

Here's the code for dealing with a mono sample:
local _temp:double = double(sound.GetSampleMono16( sndPos Mod _sndLength )) / 65536

you then need to write _temp to the buffer:
buffer[ position ] += _temp * volume

and finally output the buffer to the ringbuffer

And... you've got to do this constantly - you can't have a null buffer or you will get a crash!
And buffers must have enough data to be happy, and as little data as possible to prevent latency (latency is when you call a sound and there is a slight delay before it appears).
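
Putting those fragments together, one voice's fill pass ends up looking roughly like this (a sketch - the pan law and the extra names are illustrative, the sample maths is as above):

' Mix one mono voice into an interleaved stereo buffer, by hand.
' Returns the updated play position so the next fill carries on from it.
Function FillVoice:Double( buffer:Double[], sample:Short[], sndPos:Double, pitch:Double, volume:Double, pan:Double )
    For Local i:=0 Until buffer.Length/2
        Local _temp:=Double( sample[ Int( sndPos ) Mod sample.Length ] )/65536
        sndPos+=pitch                               ' 1.0 = original speed, 2.0 = octave up
        buffer[i*2]+=_temp*volume*(1.0-pan)*0.5     ' left
        buffer[i*2+1]+=_temp*volume*(1.0+pan)*0.5   ' right
    Next
    Return sndPos
End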



Title: Re: New audio subsystem
Post by: iWasAdam on July 19, 2018, 12:26:10
Here's a demo of MacOS OpenAL BadSound:
https://soundcloud.com/mavryck-james/bad-sound (https://soundcloud.com/mavryck-james/bad-sound)

This is a mono clap. It sounds fine, but the beginning has been lost...!

Here is the correct audio:
https://soundcloud.com/mavryck-james/good-sound (https://soundcloud.com/mavryck-james/good-sound)

You can 'hear' that the second sound has a more punchy beginning (it plays the entire sample).
The first one is softer as the beginning is missing!

one thing to note is:
- the first one (the soft bad one) is the default Monkey2 sound system on MacOS (Windows and Linux don't have this issue)
- the second one is the new audio subsystem!

Title: Re: New audio subsystem
Post by: Derron on July 19, 2018, 14:41:02
I see and hear the differences.


> And... You've got to do constantly - you cant have a null buffer or you will get a crash!
Are you sure about this? Isn't a buffer some kind of "memory block" which the audio "device" (OpenAL et al) plays over and over? When we played with "ogg streaming" in BlitzMax (old blitzmax.com days) it was all about filling the buffer ahead of time (eg. you got a 2-second buffer - if the buffer was played at position "1 second" then you streamed the next second into the 0...1s portion of the buffer). If you did not do that you ended up with a looping sound consisting of different sounds (the new one suddenly stops and the old one plays until the new one starts).
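
In buffer-half terms the rule was roughly this (sketch):

' Refill whichever half of the ring buffer is not under the play cursor.
Function HalfToRefill:Int( playPos:Int, bufferFrames:Int )
    If playPos<bufferFrames/2 Return 1   ' playing the first half - refill the second
    Return 0                             ' playing the second half - refill the first
End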

So as I thought: your audio system is more about bringing the "audio data" to the output - the output itself is provided by eg. OpenAL.


Do not forget that everything has to be done in an external thread, or else the buffer won't get refilled in time if you do some CPU-heavy stuff - eg. some image loading. For that ogg-streaming thing this meant that the provider of new data was outsourced into a C file, as "vanilla BlitzMax" builds "non-threaded" by default (NG builds threaded by default).


bye
Ron
Title: Re: New audio subsystem
Post by: iWasAdam on July 19, 2018, 15:01:20
QuoteI see and hear the differences.

You know, I shouted and shouted that this was happening on the monkey2 forums. Even posted apps, examples, suggestions - you name it, I did it. And Mark did NOTHING!

I even wanted to see if I could employ him to make the changes and he didn't even reply... Go figure  8)
Title: Re: New audio subsystem
Post by: Qube on July 20, 2018, 05:15:54
Crikey! The MacOS version sounds like it's running at half the bit rate with volume cutoff. Is it really that bad?

QuoteYou know. I shouted and shouted that this was happening on the monkey2 forums. even posted apps, examples, suggestions. you name it I did it. And Mark did NOTHING!
I could dive into a speech about this but I'll simply say this... Just ditch Monkey2 and find something else. If the developer can't be arsed then why should you put the effort into the language?
Title: Re: New audio subsystem
Post by: iWasAdam on July 20, 2018, 06:11:17
I went back to soundcloud and looked at the output from both examples. I didn't alter any volume, etc and used the same capture methods on both. But they do 'look' very different - which, as you say, feels like half the bitrate, etc.

Yep, and you're right about giving up on Monkey2. I now use MX2, which is my Monkey2 fork, with my own custom editor, etc. The only thing I know that's coming is Apple ditching OpenGL. That would mean shifting completely from MX2 to another language.

One really good thing that can be said is I have been forced into really learning and getting a grip on some interesting stuff. I thought I knew audio, but now I have complete control of it in a way I didn't think I could...
Title: Re: New audio subsystem
Post by: iWasAdam on July 20, 2018, 11:17:52
Now starting to fit things together and work on the UI:
(https://vjointeractive.files.wordpress.com/2018/07/screen-shot-2018-07-20-at-10-53-45.png)

so...

- Above shows an input (this is actually the keyboard, which maps to a music keyboard spanning 10 octaves).
- Then there is the 'SynOsc'. This is a tweaked 4-voice sound source, with each voice being a sine wave an octave higher than the last - sort of like organ stops. Each voice has a mix and amount, plus an over smooth ratio (see the sketch below the list).
- Next up is a sample. This is actually a wavetable, but just being played as a straight sample.
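
Conceptually the SynOsc core is just summed sines (very simplified sketch - the real thing has the amount and smooth controls too):

#Import "<std>"

Using std..

' phase is 0..1 through the base cycle; mix holds the four 'stop' levels.
Function SynOscSample:Double( phase:Double, mix:Double[] )
    Local out:=0.0
    For Local v:=0 Until 4
        out+=Sin( phase*2*Pi*Pow( 2, v ) )*mix[v]   ' each voice one octave up
    Next
    Return out
End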

and how does it sound?
https://soundcloud.com/mavryck-james/synosc (https://soundcloud.com/mavryck-james/synosc)

Here you can really start to hear the possibilities. The first part is just some random notes, the second part (listen closely) shows some modulation of the amount going to the first oscillator, and the last part is just some low notes - you can also hear clicking there at very low frequencies with this generator.

And here is the SynOsc on its own without any other sound source:
https://soundcloud.com/mavryck-james/just-synosc (https://soundcloud.com/mavryck-james/just-synosc)
Title: Re: New audio subsystem
Post by: iWasAdam on July 21, 2018, 10:57:28
OK, now I've got something to show...
(https://vjointeractive.files.wordpress.com/2018/07/screen-shot-2018-07-21-at-10-52-04.png)

The top line shows 4 options:
Sources, FX1, FX2 and Shapers (Shapers being currently selected)
To the right you can see the available 'blocks'.

The bottom section shows the current audio block chain (in this case keyboard -> SynOsc -> To Stereo).

To create a chain, just drag/drop from the top to the bottom, or drag the blocks around. Everything automatically sizes for you.
Icons disappear and become attached to the mouse, and places light up when you can drop them.

I thought this was an obvious and simple way to create a new chain.

Thoughts?
Title: Re: New audio subsystem
Post by: Derron on July 21, 2018, 12:18:30
This might be useful for the "visual coder" or in this case "visual music creator". Most of us surely just want to add effects to existing wave data (some ogg SFX files, some background music, or "noise" like an ambient wind sound blowing from the left speaker to the right). For that (me included), the visual aspect of a "music/sound composer" is not as important as the framework/API behind it.


bye
Ron
Title: Re: New audio subsystem
Post by: Naughty Alien on July 21, 2018, 16:52:36
..I'm sorry guys, just a quick side question as it's sound-processing related... what software do you use for sound editing/SFX (I guess freeware)??
Title: Re: New audio subsystem
Post by: iWasAdam on July 22, 2018, 07:40:02
@ Naughty Alien
I use my own software for creating and adjusting audio, cutting bits, etc:
https://adamstrange.itch.io/qasardio (https://adamstrange.itch.io/qasardio)
I don't use any software for layering as I have a large library to start with, etc.

@ Derron
Yep. I can see what you mean, but you are thinking about audio as a simple wav file you just load and play. Think of audio as audio synthesis and you are getting closer. This is complete realtime block synthesis, where you define the actual synthesis method. It's a bit like: instead of just playing a sound, you create it - you can sculpt it, mangle it, whatever.
It's not a synthesiser. It's a customised rack - you pick what you want and how you want it. Hence the visual side is a much better way to deal with it.

Here's that pesky wind example you wanted:
(https://vjointeractive.files.wordpress.com/2018/07/screen-shot-2018-07-22-at-07-25-40.png)

You can now see the faders being added in with their parameters. So we start off with white noise. That goes into a filter, where we can sculpt the incoming noise to sound more like wind. And finally we add some delay to thicken things up and give it a bit more...? Something?

That would still not be too interesting, so you would need to add some modulation into things. These would hook directly to the faders and automate things for you (I haven't started those yet).
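
The first two blocks of that chain are conceptually tiny - white noise into a simple one-pole lowpass (sketch; the real blocks have proper faders):

#Import "<std>"

Using std..

Class WindSource
    Field filtered:Double
    Field cutoff:Double=0.05   ' 0..1 - lower = darker, more wind-like

    Method Tick:Double()
        Local noise:=Rnd( -1.0, 1.0 )         ' white noise source
        filtered+=( noise-filtered )*cutoff   ' one-pole lowpass sculpts it
        Return filtered
    End
End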
Title: Re: New audio subsystem
Post by: iWasAdam on July 27, 2018, 11:27:49
Lots of tidying up and development of the sound sources and FX.

Now starting to turn my head to the control parameters:
(https://vjointeractive.files.wordpress.com/2018/07/screen-shot-2018-07-27-at-11-10-52.png)

You can see on the top row we have the available controls - these are the same for every voice/synth.
Coming down, LFO1 has been selected and is showing its available parameters (rough sketch of an LFO below).
These are arranged in 3 sections:
1. input - is there any control coming into it?
2. control - what controls are available? These are mainly faders, plus a few buttons.
3. output - here are the available outputs, which you can drag/drop to the inputs of the synth
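
The LFO itself is basically a phase accumulator (simplified sketch):

#Import "<std>"

Using std..

Class LFO
    Field phase:Double
    Field rate:Double=1.0    ' cycles per second (a fader)
    Field depth:Double=1.0   ' output amount (a fader)

    Method Tick:Double( dt:Double )
        phase+=rate*dt
        If phase>=1 phase-=1
        Return Sin( phase*2*Pi )*depth   ' this is what gets dragged to an input
    End
End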

I think Derron can start to see why (in this case) having it all visual makes sense?

The output can be mapped to ANY synth input, which will update visually - so you can see the LFO being applied to the synth parameters!

So to recap:
The synth/voice is user-definable. It could be simple sample playback, or a mono synth, or a drum synth, or a wavetable, or any mix of them. (Turn up the speakers, create a bass synth and blow your walls down!)

Let's say you want a piano sound that decays in volume? You will need to use an envelope on the control section, and link it to (well, wherever) anywhere you want.

Then just call it with Synth.NoteOn( pitch )
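
On the envelope side, that piano decay is basically just this (sketch - names invented):

' NoteOn resets the envelope; its output is wired to the voice's volume.
Class DecayEnvelope
    Field level:Double
    Field decay:Double=0.9995   ' per-sample multiplier - smaller = faster decay

    Method NoteOn()
        level=1.0
    End

    Method Tick:Double()
        level*=decay
        Return level
    End
End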



Title: Re: New audio subsystem
Post by: iWasAdam on July 27, 2018, 14:45:19
The thing missing from the previous example was how you link everything together...
(https://vjointeractive.files.wordpress.com/2018/07/screen-shot-2018-07-27-at-14-43-08.png)
;D :o
Title: Re: New audio subsystem
Post by: iWasAdam on July 29, 2018, 15:16:26
OK, still banging away at stuff.
I've now got the correct 'sockets' appearing and the ability to link them together with wires:
(https://vjointeractive.files.wordpress.com/2018/07/screen-shot-2018-07-29-at-15-10-40.png)

The base LFO is operational, but only the UI wires are connected - I need to add the internal routings now. The core is there, but not the code itself.

As shown above, each new wire added has a different colour, and each socket only supports one wire - hence 3 outputs per output.
All the logic to prevent you from wiring an input to an input, etc, is also present and correct.

Also, as a minor touch, I've given the wires some 'wiggle' when you move them. Subtle, but looks nice :)

I've also added a global output with note, volume and pan for nice automatic stuff like sweeps.
Title: Re: New audio subsystem
Post by: iWasAdam on August 02, 2018, 07:03:43
OK, so now I've got it all working. I thought I would share a shot and some thoughts:
(https://vjointeractive.files.wordpress.com/2018/08/screen-shot-2018-08-02-at-06-38-43.png)

All the wiring is done and sorted, with automatic deletion. You can see that wires going to other controls are shown thinner and go to their owner control - so you always know what goes where.

So, the pic above shows sound (noise) being created internally (no samples or input sounds from disk of any kind), and two delays. The two delays are wired the same, so a wire from one will automatically connect a wire from the other - but they behave in series! So using two gives a different sound than using one!

I'm not sure if the labels make sense, but if you know synthesisers and sound synthesis then they should do?
This leads to the next thought. This is a complex (but easy to use) synthesis back end. You can play, sculpt, mangle and adjust sound to your heart's content.

The above was created in about a minute, with a further couple of minutes playing around getting the sort of sound I wanted. What is the sound? It's the creepiest helmet-based breathing I've heard - just using different notes (one in and one out). The faster you go, the quicker the breathing sounds; go slower with a longer attack and the breathing gets heavier. The LFO gives a constant movement to the sound. It's all very creepy and in your face.

It would be great for a claustrophobic space based game.

So what's next?
Lots of tweaking
functions
Thinking about how the file systems should work
multiple voices

thoughts anyone?



Title: Re: New audio subsystem
Post by: iWasAdam on August 02, 2018, 14:46:52
Functions now complete:  :o
(https://vjointeractive.files.wordpress.com/2018/08/screen-shot-2018-08-02-at-14-41-09.png)
They operate on each of the 9 controls (so there are 9 of them per voice)

In operation they remap the output of the current control to a user-drawn version. You can also see there is a playhead which moves, so you know what is going on.

So... every control has up to 9 outputs mapped to any available input. These are grouped in threes:
normal
inverse
the function being remapped
You can use all at the same time!
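
In code terms the three taps per group are roughly: the value, 1-value, and the value pushed through the drawn curve as a lookup table (sketch):

' value is assumed 0..1; curve holds the user-drawn function.
Function Remap:Double( value:Double, curve:Double[] )
    Local i:=Int( value*(curve.Length-1) )
    Return curve[i]
End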

Again, I think Derron can possibly see the advantage of using a visual version rather than attempting to use code to create a voice?  ;D
Title: Re: New audio subsystem
Post by: Derron on August 02, 2018, 15:17:38
As written, I am not familiar with such stuff - and yes, for what you want to achieve a visual editor is surely of help.

bye
Ron
Title: Re: New audio subsystem
Post by: iWasAdam on August 03, 2018, 12:05:03
Yep, it's not a simple thing to explain. lol.

I'm sorting out some wavetable stuff ATM. Finally got my hands on the original PPG wavetable files and am tweaking how to deal with them so it is nice and simple and usable.
Title: Re: New audio subsystem
Post by: iWasAdam on August 05, 2018, 11:25:49
PPG wavetable stuff is all sorted, along with some new filter stuff.

Now for the BIG ONE...

All updated and cross-compiled on Windows and Linux without any issues!  :o
Title: Re: New audio subsystem
Post by: MikeHart on August 09, 2018, 08:43:06
Wow, that is looking really good.
Title: Re: New audio subsystem
Post by: iWasAdam on August 09, 2018, 10:44:15
thanks Mike.   ;)

The backend is using some SDL audio. Does Cerberus support SDL2?
Title: Re: New audio subsystem
Post by: MikeHart on August 09, 2018, 15:51:44
Quote from: iWasAdam on August 09, 2018, 10:44:15
thanks Mike.   ;)

The backend is using some SDL audio. Does Cerberus support SDL2?
CX uses GLFW and OpenAL right now.
Title: Re: New audio subsystem
Post by: Qube on August 09, 2018, 17:48:15
Coming together very nicely I see :) - Are you going to be offering it to the M2 community, as I'm sure there are others who would want a working sound system on MacOS?

Barely touched Monkey 2 myself and if as you say the sound is iffy on MacOS then meh :P
Title: Re: New audio subsystem
Post by: Derron on August 09, 2018, 18:22:05
@Qube
It is not a "sound system" (in the sense of "PlayAudio(file.ogg)") but a "sound generator/machine".

Think there are fewer people able to make use of it - but those who can will surely find it useful.


bye
Ron
Title: Re: New audio subsystem
Post by: iWasAdam on August 10, 2018, 06:48:04
QuoteIt is not a "sound system" (in the sense of "PlayAudio(file.ogg)") but a "sound generator/machine".
Actually Derron, it is! (if that's the way you want it to be)

It won't stream - but I could add that - but I can't see a reason for that (in this case).

All of what you have seen can be thought of as the internal structure underneath PlayAudio().
Usually (when you call PlayAudio) there is a lot of code you never see that does all the nasty stuff, like loading audio into the correct format, setting up the audio driver and feeding the driver.

This is a new subsystem. But one that you can see (because of its complexity) and rewire, redesign.

It may be that the end user never sees this side of things... ?

The equivalent to PlayAudio("cccc") would be (something like):

synth = New Synth
sound1 = synth.LoadAudio("ccccc")
PlayAudio( sound1 )
Title: Re: New audio subsystem
Post by: Derron on August 10, 2018, 06:54:32
So it is an "audio feed generator".
You might consider adding "stream support" so people could use it to get rid of the "cracks" or "missing beginning" on Mac OS X.

For such things you need to create helpers to ease the pain of mixing audio (set pan, individual volumes, ...). That means a useful group of commands for "play an ogg file" users.


bye
Ron
Title: Re: New audio subsystem
Post by: Qube on August 10, 2018, 22:18:05
Quote from: iWasAdam on August 10, 2018, 06:48:04
QuoteIt is not a "sound system" (in the sense of "PlayAudio(file.ogg)") but a "sound generator/machine".
Actually Derron, it is! (if that's the way you want it to be)
I thought so ;D
Title: Re: New audio subsystem
Post by: iWasAdam on August 14, 2018, 15:11:03
So, the latest update: everything is working and I'm starting on the final big concept... The FX.

The simplest way to think of this: all the sounds (voices) can be fed into (up to) 3 FX units. These are like the foot pedals used when playing guitar:
(https://vjointeractive.files.wordpress.com/2018/08/screen-shot-2018-08-14-at-15-01-57.png)

Each single voice has three FX sends (how much of the output to send (feed) into any FX) - this is shown above by the FX panel with the 3 FX feeds.

I have (currently) set aside 5 FX, including reverb, delay, chorus, etc. These are shown at the top.
You click and drag a pedal onto a blank pedal position below it, and the faders automatically fill themselves in for you.
FX are global - they affect ALL voices, but you can set the amount per voice...

In use it could be something like this:
footsteps (this is our voice)
we want to have different levels of echo depending on whether we are inside or out
so we use an echo FX in slot 1

when we want to use lots of echo we would use:
footsteps.FX1 = 1
or no echo
footsteps.FX1 = 0
or some echo
footsteps.FX1 = 0.25
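
Under the hood a send like that is just a scaled tap into a shared bus while mixing (sketch):

' Each voice always feeds the dry mix, plus 'send' worth into the FX1 bus.
Function MixVoiceSample( voiceOut:Double, fx1Send:Double, dry:Double[], fx1Bus:Double[], i:Int )
    dry[i]+=voiceOut
    fx1Bus[i]+=voiceOut*fx1Send   ' 0 = none ... 1 = full send (footsteps.FX1)
End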


Title: Re: New audio subsystem
Post by: Qube on August 16, 2018, 00:37:25
Quoteso. latest update is everything is working and I'm starting on the final big concept... The FX
Looking forward to seeing this all in action in your next game :)