BlitzMax NG and Metal?

Started by therevills, June 06, 2018, 10:35:08


therevills

Does BMX NG support it already?

Seems like the way forward for macOS/iOS... ???

Derron

If someone is brave enough to create a "max2dNG.mod" based on "bgfx", we would get Vulkan/Metal support (next to OpenGL, GL ES, DX, ...).

Some talk was already done there:
https://github.com/davecamp/BMax-Shader-Framework/issues/1

bye
Ron

Brucey

I'm sure we'll get there.

I envisage an eventual move to (something like) bgfx to handle low-level draws using whatever gfx backend is appropriate, in a Blitz way where the developer doesn't "need to know" what they are rendering to - it "just works".

Derron

And once such an "upgrade" is made, we would surely get some shader support for free - and/or render2texture etc. Such stuff could then be integrated as a "basic feature" for all, without 3rd-party modules. hmhmhm


bye
Ron

GaborD

That would be great and make BMX much more interesting.

Derron

@ GaborD
For now you could express your opinion on the shader framework code by Col for BlitzMaxNG - especially what attachment points you would like to see, what convenience functions you would prefer ... and so on.
Why am I asking? You seem to have experience with shaders - which many of us lack (for now) - so we always benefit from some expertise.

The shader framework project would then of course have an effect on a potential new max2D-thing (if that creature gets awakened).


bye
Ron

col

I know - but correct me if I'm wrong :P - that more texture formats need to be supported, in particular 2x8bit, 16bit and 32bit floating point, plus maybe more. These are not directly necessary for display, but for 'data textures'. If GaborD could chime in with the formats he thinks would be beneficial, that would be great.

I was also thinking of an idea to extend the TPixmap formats (as above), and also to have a TTexture with a TPixmap backing while still keeping the TImage for DrawImage etc. Any thoughts on this approach?
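Something along these lines maybe - purely a sketch, every name here is a placeholder:

' Rough sketch only - names and formats are placeholders, nothing is final
Type TTexture
    Field pixmap:TPixmap   ' CPU-side backing data
    Field format:Int       ' e.g. FMT_R8, FMT_RG8, FMT_R16, FMT_R32F
    Field handle:Int       ' GPU-side handle once uploaded

    Method Upload()
        ' push the pixmap data to the GPU in the requested format
    End Method
End Type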

BTW, my code is still very WIP and nowhere near finished. Input on additional features would be great, but try not to critique the base code, as it isn't even ready for consumption just yet ... just ready for input and ideas for improving the syntax/API.
https://github.com/davecamp

"When you observe the world through social media, you lose your faith in it."

Derron

So the "TTexture" is kind of a TPixmap ("data blob" in RAM) while "TImage" stays the GPU-stored representation/variant?

I think that sounds reasonable - unless we're missing some "shader magic stuff" interfering there.


bye
Ron

col

A TTexture would be a GPU representation of the CPU TPixmap, except you won't be able to render it in the same way as a TImage. A TImage is a 'graphical image' that you render; a TTexture would not be directly renderable as it is. Think of it as a GPU TPixmap, except you would be able to have, say, a single 8bit TTexture, a 2x16bit one (red and green channels), or a 1x32bit floating point TTexture for nice 32bit floating point precision.

There are many texture formats a GPU offers that you wouldn't actually render as an image, because... in GPUs a texture is not necessarily an image per se. As to why you would want these extra formats: it comes down to whatever effect you can dream up that runs on the GPU and requires its data there in a particular format - your imagination is the limit as to how you would use them. There are formats ready for compressed data... i.e. you dump the compressed data into the GPU as-is, without expansion. There are also 4x32bit texture formats available on the majority of GPUs nowadays, instead of being stuck with 4x8bit, for example.

Take a look here: https://msdn.microsoft.com/en-us/library/windows/desktop/bb173059(v=vs.85).aspx to see the range of formats available for DX... I'm sure GL has a similar list, as it is the GPU offering them. You also have to consider that some GPUs may, or may not, be capable of some of those formats.

EDIT: It all depends on how far people want to go and what level of control they want... you could go all out and just offer TBuffer instead of TTexture. It might actually be easier to implement the TTexture that way anyway... then you can have your own vertex and index buffer formats in the TBuffer base type too... ideal for custom particles on the GPU. As I say... do people want a premade set of shaders where they tweak some variables (I personally don't prefer this, but it's not for me, it's for you guys), and/or do you want the ability to create your own shaders with full control over the pipeline too?
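To illustrate the TBuffer route (nothing here is real code yet, just one possible shape):

' Illustration only - one possible shape for the TBuffer idea
Type TBuffer Abstract
    Field size:Int                      ' bytes allocated on the GPU
    Method Upload(data:Byte Ptr, bytes:Int) Abstract
End Type

Type TVertexBuffer Extends TBuffer
    Field stride:Int                    ' bytes per vertex, for custom formats
    Method Upload(data:Byte Ptr, bytes:Int)
        ' hand the vertex data to the active backend (GL/DX/...)
    End Method
End Type

Type TTexture Extends TBuffer
    Field width:Int, height:Int
    Field format:Int                    ' R8, RG16, R32F, compressed formats, ...
    Method Upload(data:Byte Ptr, bytes:Int)
        ' upload as texture data; compressed formats go through as-is
    End Method
End Type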
https://github.com/davecamp

"When you observe the world through social media, you lose your faith in it."

Derron

@ TTexture
OK, so it is what I first thought but deleted from my post, so as not to stand there as a noob knowing nothing but false information. Argh. :-)


@ TBuffer
TTextureBuffer for less ambiguity - or TTextureData.


@ Premade Shaders
You should be able to override stuff (replace existing parts with extended ones). Depending on how "hardwired" specific things become, the basic module should either already include a mechanism for adding stuff - or at least be replaceable by custom/individual code that allows such things (some "individual shader manager").
I assume it would be great to have some kind of default render pipeline. The default pipeline would consist of preprocessing, content rendering and postprocessing. Each of these steps is an instance of TRenderPipelineStep or so - and you can easily replace them with your custom solution.

People like our beloved GaborD could now hop in and tell us what "steps" are necessary. The "TRenderPipeline" would of course also be an extendable class, so that the whole pipeline could get rewritten (the basic module might in the end even offer two premade pipelines: a classicSequential one (no preprocessing calls, no postprocessing runs, ...) and one like described above). Of course this is only needed if the dynamic-pipeline approach (e.g. an array of steps) creates a measurable performance overhead - else a standard pipeline is sufficient (as it could get replaced nonetheless).


So in short (a rough sketch follows below):
- TRenderPipeline
- - contains a list/array of TRenderPipelineStep elements (preprocessing, actual rendering, postprocessing, ...)
- - has a property "activePipelineStep" to which a currently executed "DrawImage" (etc.) is redirected (TRenderPipeline implements a method "RenderElement(...)" which then informs the active pipeline step - or does something different in a custom implementation)
- - has helpers to add/remove/replace steps

- TRenderPipelineStep
- - is called by the pipeline during render, preSetup, postSetup ...
- - ...
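
As a very rough sketch (just me thinking out loud, nothing of this is tested):

' Just thinking out loud - names and structure are all up for debate
Type TRenderPipelineStep Abstract
    Method PreSetup() Abstract
    Method RenderElement(element:Object) Abstract
    Method PostSetup() Abstract
End Type

Type TRenderPipeline
    Field steps:TList = New TList    ' preprocessing, rendering, postprocessing, ...
    Field activePipelineStep:TRenderPipelineStep

    Method AddStep(step:TRenderPipelineStep)
        steps.AddLast(step)
    End Method

    Method RenderElement(element:Object)
        ' default behaviour: redirect to the active step;
        ' a custom pipeline implementation could do something different here
        If activePipelineStep Then activePipelineStep.RenderElement(element)
    End Method
End Type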

Hmm, I'd better stop before I end up doing what I wrote about initially: writing dumb/senseless stuff.


bye
Ron

GaborD

The system doesn't need to support too much initially, I think. I'd rather have something simple and straightforward, keeping control in the user's hands, but still fairly high level.

Just standard shaders with a robust rendertex system (so that we can chain them in a specific order - this is so important, and some engines have trouble with it) and maybe easy-to-use viewports (to render into only a part of the rendertex) would go a long way and already give you an advantage over most engines.
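In made-up pseudocode, the kind of chaining I mean (every name here is invented):

' Invented API - the point is that each pass's output feeds the next pass
Local sceneTex:TRenderTexture = CreateRenderTexture(1024, 768)
Local blurTex:TRenderTexture = CreateRenderTexture(1024, 768)
Local blurShader:TShader = LoadShader("blur.glsl")          ' hypothetical
Local tonemapShader:TShader = LoadShader("tonemap.glsl")    ' hypothetical

SetRenderTarget(sceneTex)
DrawWorld()                     ' pass 1: render the scene into sceneTex

SetRenderTarget(blurTex)
SetShader(blurShader)
DrawTexture(sceneTex, 0, 0)     ' pass 2: blur sceneTex into blurTex

SetRenderTarget(Null)           ' back to the backbuffer
SetShader(tonemapShader)
DrawTexture(blurTex, 0, 0)      ' pass 3: tonemap blurTex to the screen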

I am not an expert in the low-level stuff, so excuse me if I am a bit vague (or wrong haha) about things. This is my technical-artist point of view, without deep programming knowledge:

I agree with what Col said about texture formats; it would be great to have more than the usual 8bit RGBA.
The number of workarounds I use to, for instance, pack HDR lighting or the data for the GPU particles is crazy. Simple format support would eliminate the need, lower texture counts, and generally make things much easier.
It would be great to have 8bit, 16bit, FP16 and FP32, in 1-channel and 4-channel variants, and if possible 2-channel too. Those cover most uses I can think of and give a nice range in terms of bandwidth (it's important to be able to minimize throughput).
Not sure if anyone needs/uses specific depth formats. I always found it easier to store depth (for DoF, water, fog, atmospherics) in the alpha of the main render and save the extra lookups. But that's just my personal approach.

You would also have a leg up on many engines if you correctly support 3D textures. It's annoying to have to use a several-line function just to do something simple like color grading (which is just a single fetch if you have 3D textures).
Plus all the fun other uses like 3D noise, voxels, volume AO, etc. They just make so many things much easier.
Bonus points if you can build a volume tex from rendertexture layers at runtime, or even better, render into a layer directly (not sure if the second one is possible).

I guess some people will also ask for cubemap support (it can be useful if someone, for instance, uses cubeMapGen+ for light probes).

And there should be a way to transfer textures/rendertextures reliably between GPU and CPU on command, so that we can for instance save them to disk or use them as data on the CPU side too.
NB for instance had a copy function that simply copied a texture into an array (or vice versa). Doesn't get simpler than that.
Obviously not something you want to do with big textures every refresh haha, the bus is a bottleneck after all. Nonetheless it's extremely useful.
Rendertextures should be GPU-side only unless copied, to keep things fast and clean. 99% of their usage will be shuffling data around purely on the GPU.
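Something as simple as this would already cover it (names invented again; NB's actual call looked a bit different):

' Invented names - the point is a one-call, on-demand copy over the bus
Local tex:TRenderTexture = CreateRenderTexture(256, 256)
' ... render some passes into tex, purely on the GPU ...
Local pix:TPixmap = CopyTextureToPixmap(tex)    ' explicit GPU -> CPU transfer
SavePixmapPNG(pix, "snapshot.png")              ' now it's ordinary CPU-side data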

DXT/BCx support would be great. Saving three quarters of your VMem makes a huge difference. Some of the newer formats also support high-quality normalmaps. I think BC6 is also the only format that supports compressed HDR data.
BC6 and BC7 are DX11-level hardware only though, so there is that. :)
Compressed DDS also loads directly to the GPU without CPU-side unpacking and fiddling, which gives a huge texture loading speed boost (the last test I did in NB was a 5x to 15x difference between TGA, BMP, JPG, etc. and DXT DDS). That's a huge difference for runtime streaming of huge worlds, for instance.

Another underrated feature is support for anisotropic filtering. It makes quite a noticeable visual difference, and the speed cost is usually not too bad.


A second step could be support for modern things like geometry shaders, tessellation shaders and compute shaders.
Those are a nice-to-have cherry on top, but in my opinion not a deal maker or breaker.
Tessellation is the most important of those, I think.

I would love to use a system like this - so many demos, frameworks and fun programs could be made :)
NB was the only engine that got it right so far, in my opinion, even if some of the things sadly never got finished.

Derron

@ GaborD
Could you try to think more "engine-wise" rather than "result-wise"?

So instead of "features" (texture formats, shader variants, ...), describe how the engine should be callable and what you need to be able to set up. This is what I described as the "RenderPipeline" (albeit I may be assuming things wrongly there...).

What I got is that you want convenience functions to retrieve GPU textures (copy them to the CPU for CPU-based manipulation, or to save stuff), and a manipulatable viewport to render stuff to.
But what else?


bye
Ron