
VivaMortis - Post Mortem


OK. I thought I would do a post mortem for those who find this stuff interesting. I'll go over some of the concepts and some of the workings, but feel free to ask for more detail about how and why things are done a certain way :)

I'm not going to say "do it this way", just "I did it this way.. and why"!

First off is my language choice:
Mx2 is my fork of Monkey2. I found there were some things missing from Monkey2 (yep, I tried to have them integrated into Monkey2, but no luck there), so I felt I needed to fork it to add my own stuff.
What did I add?
Lots of colors, lots of additional graphics commands, font sprite support, lots of strange little helper functions that I considered 'missing', an overhauled sound system, and lots of weird little stuff to extend some of the base functions, plus shader support.

Next is my tools:
Generally I use the tools I code myself apart from some photoshop.
Here is a quick list of my tools and what they do
TED21 editor. It is a fork (not again!) of the Ted2 editor which came with Monkey2. Visually and operationally it's very different and very fast (for me) to work with. Line numbers, code overview and colors are all displayed, plus module compiling and lots of other odd little things I add as I need them.

PaletteEd - to create the palettes
FontSprite - this is a sprite editor which specialises in a 16x16 (256) grid of sprites: a FontSprite. The sprites can be any size and you are not limited to single characters. It uses palettes from PaletteEd (although you can modify existing ones as well). Most of the graphic work is done here. It's fast and fits in very well with my workflow. There are multiple FontSprites used with different resolutions, the core being 16x16 pixels. Here is the 8x8 pixel fontmap with the character set and UI map stuff:
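Since a FontSprite is always a 16x16 grid of 256 sprites, indexing into it is simple arithmetic. A small sketch of the idea (my own helper names, written in Python for illustration; the actual tools are written in Mx2):

```python
CELL = 16  # sprite size in pixels; the core set uses 16x16

def sprite_rect(index):
    """Return the (x, y, w, h) pixel rectangle for sprite `index` (0-255)."""
    col = index % 16   # column within the 16-wide grid
    row = index // 16  # row within the 16-tall grid
    return (col * CELL, row * CELL, CELL, CELL)

# sprite 17 sits at column 1, row 1:
# sprite_rect(17) -> (16, 16, 16, 16)
```

The same arithmetic works for the 8x8 pixel sets by changing CELL.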

FontMap - this takes FontSprites and palettes and allows you to make maps, very similar to Tiled. A 'map' can have different layers, each with its own resolution. Each layer is just a set of x,y cells, with a cell being a char and a color from the palette. How you use the layers and data is up to you. VivaMortis uses 7 layers:
4 layers are the basic room layouts. In each layer are 9 room types: corridors, big rooms, rooms with decoration, junctions, etc.

1 layer is the map itself. The characters show the room and the color shows the 'zone'.

2 layers are then used for monsters, items, etc. Again the characters and colors give different results.

Layers can be hidden and shown together, so it makes 'seeing' the results a very quick operation:
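As a rough illustration of the layer idea (a Python sketch of my own, not the actual FontMap code), each layer boils down to a grid of char + palette-index cells:

```python
class Layer:
    """A FontMap-style layer: a grid of (char_index, color_index) cells."""
    def __init__(self, width, height):
        self.width, self.height = width, height
        # (0, 0) here stands for an empty cell; rows are independent lists.
        self.cells = [[(0, 0)] * width for _ in range(height)]

    def set_cell(self, x, y, char_index, color_index):
        self.cells[y][x] = (char_index, color_index)

    def get_cell(self, x, y):
        return self.cells[y][x]
```

How the char and color of a cell are interpreted - room type, zone, monster - is then entirely up to the game.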

One thing I would like to say is 'I think visually'. If I can see it... it makes sense. Hence visual data and not text data for the maps, etc. But I can also see code that way too - code has a visual look and I suppose to me it all becomes pictures in some way? But my brain is wired differently from 'other' people.

OK. The last thing to speak about is the sound. This is completely custom written by me. It's taken several versions to get to this point (this being version 4/5). It uses a custom editor called QasarBeach - a complete reproduction of the Fairlight Series IIx and then some. The only way to think of it is 'sound lego': I have lots of little blocks that I can join together. The order they are joined in gives a different sound, and each block has many inputs.
QasarBeach is a set of 16 blocks (voices) organised in a particular way and accessed through the Fairlight UI.
It has sound editing


and individual control for each voice

The underlying sound system was written to support many different sample formats from old 80s/90s synths and samplers.
QasarBeach is a very complex but fast system, written as a copy of the old Fairlight system (a lot of communication was had with the original people who created the Fairlight, and also with those who keep the machines alive).
VivaMortis is the first time the technology has been seen outside of the lab. It was also designed to be portable for me to use directly in games and other things...
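As a toy illustration of the 'sound lego' idea above (a Python sketch of my own model, not QasarBeach's actual internals): joining the same blocks in a different order gives a different sound.

```python
class Block:
    """One 'lego brick': wraps a transform from samples to samples."""
    def __init__(self, fn):
        self.fn = fn

    def process(self, samples):
        return self.fn(samples)

def run_chain(blocks, samples):
    """Run the samples through the blocks in order; order changes the sound."""
    for block in blocks:
        samples = block.process(samples)
    return samples

gain = Block(lambda s: [x * 2 for x in s])        # double the amplitude
clip = Block(lambda s: [min(x, 1.0) for x in s])  # hard limit at 1.0

# run_chain([gain, clip], [0.4, 0.8]) -> [0.8, 1.0]  (boosted, then limited)
# run_chain([clip, gain], [0.4, 0.8]) -> [0.8, 1.6]  (limited too early)
```

The two chains contain exactly the same bricks; only the join order differs.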

OK. That's all for now. Lots of images (I know - but I think in pictures) and lots to take in. If there is anything you would like to know more about, just ask :)

Ok so this was about "assets" and their creation.

What about the game logic itself, screen handling, idea finding...


oky-doke... screen handling it is then ;)

Let's get the distinction between window and screen clear:
The window is how the OS sees things. It could be a floating window or it could be fullscreen. You don't really know the size of the window when you give the user the flexibility to resize it (unless you specify it and lock it down), so a window could be 640x480 on one system, 1024x960 on another, or even 2000x1500 on another with a retina display.

We now have a window render size of Width and Height. But how do we treat this as a screen with a fixed 256x192? We use image/canvas objects.
In essence a canvas is a bitmap or image of a fixed size that we do all our drawing to. Then we just draw this bitmap to the window (letting the language handle the stretching). Some pixels will get stretched in lines here and there - but in general it works.

We are now dealing with a fixed 256x192 resolution and can treat it just like the original sized (Spectrum) display.
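The stretch itself is just a ratio. A sketch of the idea in Python (my own helper; this version letterboxes to preserve the aspect ratio, whereas you could equally let it stretch to fill):

```python
VIRTUAL_W, VIRTUAL_H = 256, 192  # the fixed canvas resolution

def fit_canvas(window_w, window_h):
    """Return (x, y, w, h): where to draw the stretched canvas in the window,
    scaled as large as possible while keeping the 256x192 aspect ratio."""
    scale = min(window_w / VIRTUAL_W, window_h / VIRTUAL_H)
    w, h = int(VIRTUAL_W * scale), int(VIRTUAL_H * scale)
    x, y = (window_w - w) // 2, (window_h - h) // 2  # center the rest
    return (x, y, w, h)

# A 1920x1080 window gets a 1440x1080 canvas with 240px bars either side:
# fit_canvas(1920, 1080) -> (240, 0, 1440, 1080)
```

Everything in the game then works purely in 256x192 coordinates; only this one draw call knows about the real window.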

So how do I get from 16 colors to 1 color?
This is more involved, as it uses a shader.
The shader takes everything that isn't black and makes it the input color, so you just set the input color, run the shader and your 16 colors instantly become mono...
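In software terms the shader amounts to a per-pixel replace. A tiny Python sketch of the same logic (the real thing is a GPU shader in Mx2; this is just to show the idea):

```python
def to_mono(pixels, ink):
    """Replace every non-black pixel with the single input color `ink`.
    pixels: a list of (r, g, b) tuples; black stays black."""
    black = (0, 0, 0)
    return [black if p == black else ink for p in pixels]

# Any colored pixel collapses to the ink color:
# to_mono([(0, 0, 0), (255, 0, 0)], (255, 255, 255))
#   -> [(0, 0, 0), (255, 255, 255)]
```

Change `ink` per room and the whole map recolors in one pass.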

But you have the UI that is multi-colored as well?

Yep. but think of it like this:
layer/bitmap 1 is the ui and everything that is not to be converted by a shader
layer/bitmap 2 is just the map

when you come to render, you do the following:

draw bitmap 1 to the window

if we want spectrum mono:
draw bitmap 2 with the mono shader to the window
otherwise:
draw bitmap 2 normally to the window

(on top of bitmap 1)

Here's the actual code (where canvas is the window, canvasImage is the UI, canvasImage2 is the map ):

--- Code: ---
canvas.Color = Color.White
canvas.DrawRect( 80, 40, Width-160, Height-80, canvasImage )

If _spectrum Then
	canvas.Color = _palette.GetRGBA( _roomCol )
	canvas.DrawRect( 80, 40, Width-160, Height-80, canvasImage2, _myShader )
	canvas.DrawRect( 80, 40, Width-160, Height-80, canvasImage2, _myShader2 )
End If
--- End code ---

Monkey2 needed to be recoded to allow different shaders to be used on images. Hence MX2.

idea finding - ohh a hard one.

My first concept was for a character-based scrolling shoot-em-up, sort of like a big sci-fi Ghouls 'n Ghosts. But on seeing how the entries were coming along I thought this was going to be serious competition. I needed to rethink and play to the specs to make something that would stand out.

My first thought was for an isometric version of Sabre Wulf, but this was shot down as the graphics used were considered to be too close to Knight Lore, i.e. ripped off. So I needed to go and work on some variations of the graphics.

How did Mortis appear?
My first attempt was to cowl the figure I had, and the face was great, but removing it was even better (whilst I was drawing it). So the cowl without a face became something to aim for. And then I thought: what if the cowl was taken off and it was a skull? Which became Mortis once I had redrawn the character.

Firing was something that I wanted, and a fireball was originally used - but I thought that was a bit derivative, so again the skull became your weapon.

At this stage the general concept was there - running around, firing and collecting runes. I needed some way to bring it all together, and I thought Day of the Dead... Pinata... Skulls. And suddenly it all tied up.

Next up was to find a Mexican Day of the Dead style, and while Googling I saw a poster for Viva Mexico and thought 'ooh cool'... Viva... viva... Viva Mortis. Live death, and it all fit together.

Now I had a concept, a general game and an overriding theme...

Thanks for the insights..

Render2Texture is what I would use too when it comes to proper virtual resolutions. For now there is no proper r2t which works with both vanilla and NG. I already nagged (our fellow forum user) col to update his great render2texture code to work with NG too (so sdl.glmax2D and sdl.gl2sdlmax2D). Why is that needed? BlitzMax contains "VirtualResolution", but while this works fine for DrawImage() and DrawRect() commands, it ignores the scaling for simple Plot(), DrawLine() and DrawOval() calls. This is why in GenusPrime all stars are not simple "pixels" but DrawRect(x,y,1,1) calls.

Using shaders is also not a (trivial) option for NG (for now), so the color stuff would need to be handled "in software" here. This is why I wrote some stuff to map any RGB color to a given palette - it uses a CIELAB color distance, which is not a strict value but a perceived color distance. I think a similar approach could be of use for you if you e.g. had a game which allowed multiple palettes, so you could switch automatically (of course it might need adjustments here and there). This could also allow automating color animation effects - with an NES or SNES palette (or 256 colors) this might become even more powerful.
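To make that concrete, here is a hedged Python sketch of the approach (my own code, not the actual BlitzMax implementation): convert sRGB to CIELAB using the standard sRGB/D65 constants and pick the palette entry with the smallest squared ΔE76 distance.

```python
def srgb_to_lab(rgb):
    """Convert an (r, g, b) tuple in 0-255 to CIELAB (D65 white point)."""
    def lin(c):  # sRGB gamma -> linear
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    # linear RGB -> XYZ (sRGB matrix, D65)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    def f(t):  # XYZ -> Lab companding
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 reference white
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

def nearest_palette_color(rgb, palette):
    """Return the palette entry perceptually closest to rgb."""
    lab = srgb_to_lab(rgb)
    def dist(p):
        pl = srgb_to_lab(p)
        return sum((a - b) ** 2 for a, b in zip(lab, pl))
    return min(palette, key=dist)
```

Swapping palettes then just means re-running `nearest_palette_color` for every source color against the new palette.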

@ screens
You name them "pages". Screen transitions and so on: did you reuse existing stuff / a framework?


