Open AL Issue

Started by Hardcoal, February 01, 2021, 21:04:17

Derron

I should clarify a bit: if you use OpenAL, then there is no need to do the "recorded samples fetching" in a thread.

BUT ... depending on what else needs to be done there (e.g. some processing, saving to file) it might still be a good idea to do this stuff in a thread - it allows your app to stay responsive. You just show a little "progress bar" at the bottom of your app ("processing ...", "exporting to disc ..."). Of course one could do without threads - by just doing little steps each time ("process 100 bytes ... save it ... process main app ... process 100 bytes ...").

Just think of all this audio stuff as being a "task" - one task of many in your app. Then you will see how "threads" can help there.


bye
Ron

Hardcoal

#31
I love the idea of threads but I've never used them.
I've got threadophobia.
I'll need to overcome it sooner or later.

But wait... if it gets its own thread, why does this need to be in a loop?
Or maybe it only cashes in thread mode?
I mean it should only be turned on and off, according to my logic

Derron

Quote from: Hardcoal on March 11, 2021, 10:53:40
But wait.. If it got its own thread.. Why this should be in a loop?
Or maybe it only cashes in thread mode?

Can you please rephrase what you mean (exactly)?

Talking about fetching samples from OpenAL?
There are often loops doing stuff like "while data is in the buffer, fetch it" - this is to avoid the buffer running out of space. Of course this can be done in a thread too (to avoid hogging the main thread if reading is too slow compared to recording new data - which likely does not happen here).
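That "while data is in the buffer, fetch it" loop looks roughly like this with OpenAL's capture extension. This is only a sketch - the sample rate, format, and buffer sizes are assumptions, and real code needs proper error handling:

```c
/* Sketch: draining OpenAL's capture buffer so it never overflows.
   Device/format choices here are assumptions, not from the thread.
   Needs the OpenAL headers and library to build. */
#include <AL/al.h>
#include <AL/alc.h>
#include <stdio.h>

int main(void) {
    /* 44.1 kHz, 16-bit mono; internal capture buffer of 1 second */
    ALCdevice *dev = alcCaptureOpenDevice(NULL, 44100, AL_FORMAT_MONO16, 44100);
    if (!dev) { fprintf(stderr, "no capture device\n"); return 1; }

    alcCaptureStart(dev);

    ALshort chunk[1024];
    for (;;) {                       /* your main/recording loop */
        ALCint avail = 0;
        alcGetIntegerv(dev, ALC_CAPTURE_SAMPLES, 1, &avail);

        /* fetch everything currently available, in chunks */
        while (avail > 0) {
            ALCint n = avail > 1024 ? 1024 : avail;
            alcCaptureSamples(dev, chunk, n);
            /* ... process / append `chunk` to your recording ... */
            avail -= n;
        }
        /* ... keep the rest of the app responsive here ... */
        break;  /* sketch only: exit after one pass */
    }

    alcCaptureStop(dev);
    alcCaptureCloseDevice(dev);
    return 0;
}
```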


bye
Ron

iWasAdam

in simple terms:

your base app has its own 'game loop' which is more or less tied to the display: when the display refreshes, so does the game loop. It could be 60fps, or 75, or 120.

for audio you need a sound buffer which is always kept filled with data. When it reaches the top, the buffer contents are passed to the internal audio subsystem and the result is played. The buffer is then refilled.
NOTE
the buffer may be filled in bits - lots of data here and possibly not as much there - but always in time with it being passed to the internal playback.
Now... the audio can have a buffer of any size, but a size between 128-1024 samples is common. The lower the number, the lower the latency of the audio, but the more times you must send the resulting audio to the internal systems. It's a trade-off you decide on when you need to.

OK. The above is called a 'ring buffer' and as you can see it runs much faster than FPS. For CD quality you are looking at 44100 samples a second. So usually you have some form of audio thread that just deals with the ring buffer.
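The ring buffer itself is a small structure. A minimal single-producer/single-consumer sketch in C (the names and the 1024-sample size are illustrative, not from the thread):

```c
/* Minimal ring buffer: writes advance `head`, reads advance `tail`,
   both wrap around a fixed-size array. */
#include <stddef.h>

#define RING_SIZE 1024   /* power of two keeps the wrap-around math cheap */

typedef struct {
    short  data[RING_SIZE];
    size_t head;   /* next write position */
    size_t tail;   /* next read position */
} Ring;

static size_t ring_count(const Ring *r) {
    return (r->head - r->tail) & (RING_SIZE - 1);
}

/* returns 1 on success, 0 if the buffer is full */
static int ring_push(Ring *r, short sample) {
    if (ring_count(r) == RING_SIZE - 1) return 0;   /* full */
    r->data[r->head] = sample;
    r->head = (r->head + 1) & (RING_SIZE - 1);
    return 1;
}

/* returns 1 on success, 0 if the buffer is empty */
static int ring_pop(Ring *r, short *out) {
    if (r->head == r->tail) return 0;               /* empty */
    *out = r->data[r->tail];
    r->tail = (r->tail + 1) & (RING_SIZE - 1);
    return 1;
}
```

With one thread pushing captured samples and another popping them for processing, this is the structure that keeps the audio side decoupled from the game loop.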

In Blitz Max/NG all of the buffers, etc, are hidden from you.

You are dealing with TSound, etc. All of these structures are handling the buffers, memory, sound storage, etc. for you.

The moment you start directly creating realtime audio, you WILL need to deal directly with the buffer systems.


So what is OpenAL?
OpenAL is the internal layer that speaks directly to the computer's audio hardware. The only thing it wants is a data stream buffer that is constantly filled (from a ring buffer).
It knows when you have not filled the buffer and it will complain - cause an error. So you need to use a thread to keep it filled ;)
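The usual way to keep OpenAL fed for playback is a small set of queued buffers: unqueue the ones the device has finished, refill them, queue them again. A sketch - `fill_buffer` is a hypothetical helper (it would write fresh PCM via `alBufferData`), and real code needs error checking:

```c
/* Sketch: pumping an OpenAL streaming source. Call this regularly
   (e.g. from an audio thread). Needs the OpenAL library to build. */
#include <AL/al.h>

extern void fill_buffer(ALuint buf);  /* hypothetical: refills via alBufferData */

void pump_source(ALuint source) {
    ALint processed = 0;
    alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);

    while (processed-- > 0) {
        ALuint buf;
        alSourceUnqueueBuffers(source, 1, &buf);  /* take back a finished buffer */
        fill_buffer(buf);                         /* refill it with new audio */
        alSourceQueueBuffers(source, 1, &buf);    /* hand it back to OpenAL */
    }

    /* if the queue ran dry the source stops - the "complain" case above */
    ALint state = 0;
    alGetSourcei(source, AL_SOURCE_STATE, &state);
    if (state != AL_PLAYING) alSourcePlay(source);
}
```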


Lastly, what is an input stream? Like the output stream (playback), it reads input into an internal buffer. When the buffer is full, it flags a full buffer and allows you to copy the contents of this buffer into your app whilst it refills the buffer.


Think of the audio as two buckets:
one you fill and it becomes sound out
one that is filled internally and is given back to you - recording


There are lots of different approaches to handling audio. There is even a very simple way that means you don't have to deal with buffers, etc. but that is another thing entirely.

The key is to work out what YOU want. Then do some research, figure out what can and can't be done simply and then take it from there.

NG gives you direct access to pan, pitch and volume through the channel system. Generally, using that is much simpler and can even go as far as making basic synths without touching any of the nasty buffer stuff. You can even set up a simple timer in your app to pump data to these, giving you a huge range of control - but you have to have a basic grasp of WHAT YOU WANT first ;)

Hardcoal

awesome explanation, "Ex" Adam.. thanks Derron as well..

Adam

yea Derron, I was a bit unclear - when I re-read what I'd posted I didn't understand what I meant either :)

yea.. I'm working on improving my English as we print..

anyway.. let's see what I can do with all the new information you guys gave me..

and move on from there

Midimaster

As I understood it, Hardcoal only wants to use OpenAL for recording. Hardcoal, please describe what your app shall be able to do.

...back from Egypt

Hardcoal

#36
I'm doing multi-track recording of MIDI and samples..
I'm trying to make an educational app as well..
music games..
all sorts of ideas..

It's a playground..

but it will be released first as an educational or help tool for beginners in music.
I'm learning music while doing it..


The multi-track recording is on second priority level, but I want to master it too.


I also want to mention that I work on more than one project at a time.
That's how I like it. (Two atm)
It's more interesting to work on more than one project.. plus I learn from one and apply to the other.




iWasAdam

oookkkkaaayyy....

I am sure that Midimaster will say something similar, but here goes...

multitrack recording and playback of both MIDI and samples requires 2 things:
1. low latency
2. EXACT timing

To do this you will HAVE to use threads, understand how the audio subsystem operates and how ring buffers work, comprehend how MIDI actually works (psst, it is not straightforward and there are many, many caveats to it) and then work out how to put it all together, then test, test, test - plus you WILL NEED to have many different keyboards, MIDI devices and synthesisers just to debug the MIDI!

And remember - with audio and MIDI, all of this happens WITHOUT the UI. It MUST operate completely automatically and in the background.

------

Not to sound like a party pooper. THIS IS A HUGE, IMMENSE AND COMPLEX TASK.

One last hyper important thing to think about is this:
There is a reason there are very few third-party audio tools and even fewer sequencers: audio is hard, and there is very little information about it. And those who do have this knowledge guard it like gold...

Hardcoal

#38
I've already made my MIDI recording
on a basic level..
no modulations etc.. only notes.. volume and timing..

I have no intention to make it industry-standard level..
I'm just playing around, and using you guys' advice to make progress.

I've never messed with threads.. and I'm gonna experiment with them soon, sounds so interesting... and intriguing




iWasAdam

#39
the one thing you MUST do with MIDI recording is test with multiple keyboards - they send out different MIDI data re: note on/off!
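The classic note-on/off difference: per the MIDI 1.0 spec, some keyboards send a real Note Off (status `0x8n`), while others send Note On (`0x9n`) with velocity 0 to mean the same thing. A recorder has to normalize both; a sketch:

```c
/* Normalize the two common "note off" encodings.
   Status bytes per the MIDI 1.0 spec:
   0x8n = Note Off, 0x9n = Note On, where n is the channel nibble. */
#include <stdint.h>

typedef enum { EV_NOTE_ON, EV_NOTE_OFF, EV_OTHER } EventKind;

EventKind classify(uint8_t status, uint8_t note, uint8_t velocity) {
    (void)note;                            /* note number not needed here */
    uint8_t type = status & 0xF0;          /* strip the channel nibble */
    if (type == 0x80) return EV_NOTE_OFF;
    if (type == 0x90)                      /* Note On... */
        return velocity == 0 ? EV_NOTE_OFF /* ...but velocity 0 means off */
                             : EV_NOTE_ON;
    return EV_OTHER;
}
```

Without this normalization, recordings from one keyboard play back with stuck notes when the data came from the other kind.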

Also... And this is even more important

If you intend ANYONE else to have access to this, your timings MUST be EXACT - we're talking milliseconds.

I'm speaking from bitter experience here. People who use MIDI will notice timing slips, they will complain and you will get pissed off.
That means when playing back recorded or entered note values, they will notice if a single beat is out of time.

Let's assume that you have a simple repeated beat played by a bass drum: 1,2,3,4, 1,2,3,4, 1,2,3,4, etc.

and the 3rd beat in the 10th bar is off by 100ms - they will notice and complain.
but on your machine it all sounds fine
but you also notice every now and again a note time sounds slightly 'off' - they WILL notice and complain.

It doesn't matter how 'off' it is - the timing has to be EXACT.

If you think SyntaxBomb members can be a bit mean sometimes. End users (particularly music related) are loud and mouthy. Just be aware of that  :(

Midimaster

I think writing a MIDI app for himself and some friends is easy. But to write it for customers you will have to handle hundreds of problems with different computers. The MIDI interfaces and setups of end users are stubborn and often do not work as expected. And the only response the user can give you is "It did not work!". So you should also write a logging/protocol system to find out what happened on a user's computer.

From my practice I can tell you: the timing is not as problematic as iWasAdam expects. MIDI does not need a lot of performance, and a variability of 30msec is exact enough for most purposes. The bigger problem is the 90-180msec a Windows GM MIDI port needs to process your MIDI command until the speaker starts playing it.

The biggest problem is to coordinate the different latencies of all the devices when doing multitrack recording and playback simultaneously. When playing back audio and MIDI at the same time... how to guarantee synchronicity?

When receiving MIDI-IN and Audio-IN... how to know the synchronicity?

When playing and recording at the same time... how to repair (adjust) the incoming signal to the correct time point of the formerly outgoing sample?

Think about this situation: the software is playing a song. Each byte is processed at a certain time, but it needs 100msec until the speaker plays it. Right now the musician can hear it and plays his second track. The AUDIO-IN buffer needs another 100msec until the software can fetch it there. Now... where to save this incoming byte on the timeline? On each computer this latency is different; each day on the same computer the latency is different. Good luck.
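The timeline question above boils down to subtracting the two latencies from the arrival time. A sketch of the arithmetic - the formulation and function name are mine, and the latency values are per-machine measurements, not constants:

```c
/* Where does a captured sample belong on the song timeline?
   The musician plays along with what the speaker emits, so:
     heard   = t_sent  + output_latency   (speaker plays the byte)
     arrived = heard   + input_latency    (mic byte reaches the app)
   => timeline position = arrival time - output latency - input latency.
   Both latencies must be measured on the user's machine. */
double timeline_position_ms(double arrival_ms,
                            double output_latency_ms,
                            double input_latency_ms) {
    return arrival_ms - output_latency_ms - input_latency_ms;
}
```

With the 100msec + 100msec figures from the example, a byte arriving at t=1200ms belongs at t=1000ms on the timeline - and both numbers drift per machine and per day, which is exactly the "good luck" part.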

...back from Egypt

iWasAdam

sending MIDI isn't an issue but the timing of a sequencer IS. Particularly when you are also dealing with controlling the output of voices, samples and MIDI - and that is without realtime control added.

I've written a few realtime sequencers, the latest being QasarBeach
https://adamstrange.itch.io/qasarbeach

That was directly captured from the app with no added effects (apart from the visuals). You can see everything running realtime, effects, UI.

It took 3 rewrites of the sound system to get 'right'.
A complete transfer into threads, once, to get the timing 'right' - audio capture is still not correct on some systems, particularly low-end Windows.
MIDI was rewritten twice because some systems refuse to pick up MIDI instruments - unfortunately this can ONLY be tested by people with those instruments and setups, so you will need to have a close bond with your users.
The song sequencer internal was completely replaced and rewritten with the assistance of very skilled technical engineers who helped me reverse engineer old data formats and properly understand how linked sequences worked. <- you will have to come up with your own take on this and how it all fits together. but suffice to say it's not quite as simple as it seems.

I do have a variant build for NG, but the only way I could make the audio work was to rewrite the entire NG audio subsystem. something I would strongly advise you NOT TO DO.

The QasarBeach audio system was built completely from scratch.

I'm not saying you shouldn't, i'm just saying don't bite more than you can handle.

Quote from: Hardcoal
Ive already made my midi recording, a month ago..
on a basic level..
no modulations etc.. only notes.. volume and timing..

Take things slow and focus on doing one small thing well, then add to that. You will learn as you go.

One last thing about file formats - particularly .wav
They are 'containers'. If you are parsing them yourself you will have to make decisions on exactly which formats to natively support and convert correctly. Unpacking compressed stuff is nasty, and even I (along with many others) never support compressed wav.
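As an illustration of the 'container' point, a minimal sketch that checks the canonical RIFF/WAVE layout and reads the fmt chunk's format code (1 = plain PCM; anything else is a compressed or extended format you may decide not to support). Real .wav files can carry extra chunks before "fmt ", which this sketch deliberately does not handle:

```c
/* Sketch: identify a .wav header and read its audio format code.
   Assumes the canonical layout: "RIFF" at 0, "WAVE" at 8, "fmt " at 12,
   and the 16-bit little-endian format code at offset 20. */
#include <stdint.h>
#include <string.h>

/* returns the fmt format code (1 = PCM), or -1 if not a RIFF/WAVE header */
int wav_format(const uint8_t *hdr, size_t len) {
    if (len < 22) return -1;
    if (memcmp(hdr, "RIFF", 4) != 0) return -1;
    if (memcmp(hdr + 8, "WAVE", 4) != 0) return -1;
    if (memcmp(hdr + 12, "fmt ", 4) != 0) return -1;
    return hdr[20] | (hdr[21] << 8);   /* little-endian uint16 */
}
```

A loader built on this can reject anything whose format code isn't 1 up front, which is exactly the "never support compressed wav" decision.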

Hardcoal

#42
I totally agree with you people..
I'm pretty far from writing MIDI for clients.. I'm doing it for myself.. and for freeware atm

I can understand when people get pissed.. I get pissed too at stuff, if it's justified.. or if I paid for it.

all I did up till now with the MIDI is simply saving the note and replaying it.. and it still sucks and isn't even..

the delay of MIDI only happens when you do calculations or use a VST.. otherwise it is instant..
I had zero delays.. because I only save the MIDI notes on my computer, and when I play, the MIDI plays the piano itself..

zero latency..

all I saved in my MIDI file is raw 4 bytes..
but I would like to make a .mid file reader.. for reading standard MIDI files..
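For a .mid reader, one of the first hurdles is that Standard MIDI Files store delta times as variable-length quantities: 7 payload bits per byte, with the high bit meaning "more bytes follow". A sketch of the decoder:

```c
/* Decode one SMF variable-length quantity (per the Standard MIDI
   File spec, a VLQ is at most 4 bytes). Stores the value in *out
   and returns the number of bytes consumed. */
#include <stdint.h>
#include <stddef.h>

size_t read_vlq(const uint8_t *p, uint32_t *out) {
    uint32_t value = 0;
    size_t i = 0;
    do {
        value = (value << 7) | (p[i] & 0x7F);  /* append 7 payload bits */
    } while ((p[i++] & 0x80) && i < 4);        /* high bit set: continue */
    *out = value;
    return i;
}
```

For example, the two bytes `0x81 0x48` decode to a delta time of 200 ticks.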

At one point I will consider joining someone to my app, if one desires to.
but like I said it's only a playground for now..

1) For example I want to make a game where you get a chord name and you need to play it on the piano.. for learning chords..
2) I want to make a singing pitch-perfect voice trainer..


I'm in a music school at the moment, so that's what drove me into all this music stuff.
plus I was never satisfied with any program I've seen for music editing..
they all had issues for me

until I met Mixcraft.. which is near perfect for me.. but it also has its stupid things..
but it's 90% amazing..

anyway.. learning this music stuff will allow me to make my own music apps slowly..
and I can promise you there is a lot of stuff that can be made.


btw I'm the Pet Shop Boys' number one fan

Quote from: iWasAdam
Take things slow and focus on doing one small thing well, then add to that. You will learn as you go.

I'm very bad at taking things slow..
my system is: make it work first, then improve it later.


[cheers. shabat shalom, lol]



iWasAdam

Quote from: Hardcoal
im in a music school at the moment
OH excellent - kudos to you :) You will be introduced to all sorts of interesting stuff there.

The fact that you can program gives you a huge advantage over your classmates. My advice is to think about what they are teaching and come up with something unusual and interesting to you. ;)

Also beware of people who say "YOU CAN'T do that...", etc. Most of the advances were made by people who refused to listen (to dumb people) and tried to think differently.

OK. A little something for you to think about.
When the word 'sampler' is mentioned you immediately think about files, playback, etc.

It was never conceived as that! The original design was for additive synthesis to create real-sounding instruments (circa 1977). However the results never sounded 'real', so more memory was added to the development system (around 128k) and a recorded sound was transferred into the new memory and played back - that sounded 'real'.
Because it was such a small amount of memory and only a small sample of the sound could be transferred, the result was referred to as an 'audio sample'. Once the product was completed, the words 'sample' and 'sampler' were coined...

If I was to give you one solid bit of advice it would be this:
x86 as a platform is virtually dead. ARM (possibly RISC-V) is the future of computing. Think about a dev platform that allows you to compile to an ARM-based system - the Pi 4 is great and cheap and would allow you to use stuff like DACs to build your own device. Or the new Apple M platform - but these cost money and will lock you into them. Arduinos are amazing and cheap, but you will have to think a bit low-level with them - they can do amazing things and the community is very good ;)

Don't reinvent a broken wheel (unless you want to do it for learning purposes). E.g. reading .mid files is really a waste of your time. The files are NEVER accurate, and the results always sound - meh!

Beware of anything you write to do with tuning - computers have a habit of doing exactly what you tell them. So if you 'think' 14758 is middle C and it's off, the computer won't care - but you and everything you tune with your app will be 'off'. Buy a proper, valid guitar tuner or a signal generator and use that to test that your frequencies are correct ;)



Hardcoal

wow, thanks Adam for this kind and detailed reply..
I'll take all you've said under consideration..

some of the things you've mentioned I have no idea what you were talking about though..
I will have to google some of the terms...
but the big picture was clear..

as I've said.. I owe no one anything, so I do it and learn from my mistakes..
everything can be fixed/tuned and so on..

If I ever release it for serious musicians I will give them the ability to fine-tune everything manually, so they won't be able to complain to me :D lol

give the user as much flexibility as possible.. why not?