Isometric Dreams.

This project is a dream come true for me, since I always wondered what those classic 2d videogames (such as Super Mario Bros. and Metroid) would look like if you could rotate the viewpoint and see the blocks from the side. Before the Nintendo 64 was unveiled, I naively believed the hardware would be able to pull off a 3d version of Super Mario Bros., so I was certainly disappointed by the very rare 3d '?' blocks in Mario 64. I concluded at the time that the 3d hardware in the N64 probably could not display hundreds of textured and lighted cubes and shapes at close distance and rotate them smoothly. That, and other reasons (the 3d look of a rotated 2d side-view videogame looks weird), prevented Nintendo from releasing such a game. The GameCube has the power to render a 3d version of Super Mario 1/2/3, Zelda and Metroid in real time; let's hope Nintendo will be inspired by my projects ;)

 

The DIY (do it yourself) approach.

But for now, since Nintendo was not providing my dream, I decided to attempt it myself, my way. I first tried to convert the 2d 16x16 graphic tiles by hand into 30° isometric cubes or shapes using drawing and distortion tools. It was not that easy, and it would have required a lot of demanding artwork design time to keep the look of the original tiles. For example, there are 282 tiles in Metroid (well, actually some repeat); it would have been time consuming (I'm kind of a perfectionist at these things). I then tried doing it with 3d applications, but had a hard time designing objects that were close enough to the original 8-bit look for my taste. Textures and objects have to align precisely when building the levels in the game, so again it was time consuming to convert the 282 tiles.

 

Birth of "The Cubic Sculptor".

So I developed my own tool to "sculpt" the graphic tiles in a 32x32x32 3d virtual cubic grid. This way I can keep the original look and pixels of the game's tiles. I called the tool "The Cubic Sculptor" (TCS). The process I use transforms pixels into "voxels" using user-designed depth maps. What I mean by "user-designed" is that I had to design a depth map for each tile in the game. The depth maps are 16x16 like the original tiles, but the image uses grayscale (and bluescale, for some reason, in TCS); each pixel value represents a depth. The darker the pixel, the closer the sculpted voxel is to the side of the virtual grid; the lighter it is, the closer the voxel is to the center.
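
As a rough illustration of this convention (in Python rather than the project's Lingo), here is a tiny sketch of how a single grayscale depth-map value could translate into a sculpting depth; the 0-255 value range and the exact scaling are assumptions, not the tool's actual numbers.

    GRID = 32    # the virtual cubic grid is 32x32x32

    def pixel_to_depth(gray_value):
        """Dark pixel -> voxel near the side of the grid; light pixel -> voxel near the center."""
        max_depth = GRID // 2 - 1              # a side can sculpt at most up to the center
        return round(gray_value / 255 * max_depth)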

 

Voxels? What the @#!%& are voxels?

A voxel is the 3d equivalent of the pixel: a pixel is a color/light element on a 2d grid, and a voxel is an element on a 3d grid. Voxels are very rarely used in computer-generated 3d graphics for movies and TV; those are rendered from polygons instead, and sometimes polygons are used to simulate voxel-like effects for grass and terrain, but people these days confuse those voxel-like effects with true voxel rendering. For example, some games use "VoxelScape" terrain, which is drawn using vertical lines and is limited to terrain rendering; those are not really voxels. The only commercial game using voxels that I know of is Worms 3d. They call them "poxels" since they are rendered with polygons, but they are voxels at their basis. Polygon-based 3d rendering is to voxel-based rendering what using vector images (as in Illustrator or Flash) is to using bitmaps (such as gif/jpeg/Photoshop) in 2d images.

The main reason voxels are rarely used is that they are very mathematically intensive. With polygons, 3 × (x,y,z) values (9 numbers) are enough to define a triangle that can easily fill a part of the screen with a high-resolution picture. A voxel only uses one (x,y,z) value, but there is one voxel for every point of a given triangle, and that number increases with resolution. A cube that needs only 6 or 12 polygons generates, at a very "low" 16x16x16 resolution, a staggering 1536 (16*16*6) surface voxels, and the count grows very quickly (with the square of the resolution for a cube's surface) as you increase the resolution or the complexity of the object(s): at a resolution of 128x128x128, a cube generates up to 98304 voxels (actually no more than half of those have to be displayed; the others are hidden). Even if often only some percentage of the voxels are visible or lit, you can understand that it will always be much faster to render large flat surfaces using polygons.

Voxels still have some advantages over polygons, many of which suit my project and future projects. First, they give true bumps, as opposed to the pseudo bump mapping used in polygon-based renderers, including ray tracers. To illustrate, imagine a polygon-rendered cube with a very grainy rock texture on it: even though moving the light affects the texture on the sides of the cube in a very smooth way, giving a convincing impression of bumps, you still see the vertices and edges of the cube very sharply, and if you rotate the cube so you see a side edge-on, it looks completely flat; you lose the bump mapping effect. With voxels, bumps are part of the stored information, since the voxels define the 3d position of each point on a surface. Another advantage for my project is that things are much easier to align than in a standard 3d polygon program, and it is easier to ensure there is no bad distortion of the original tile information.

So, since I didn't find a suitable voxel-based program (those are mostly used on custom medical-research computers), I decided to create one myself, again to ensure the integrity of the original graphics. I used Macromedia Director, which I have become really good at over the years (I started with version 2! :)). I spent hundreds of hours in total designing my program over a period of about 2 years. I learned many things along the way, so I could now redo it from scratch in a few weeks; in fact, parts of the program were redesigned from scratch a couple of times as I upgraded Director to new versions.
I experimented with many rendering techniques, and had to deal with the slow speed of my old computer as well as with many Director limitations and bugs. With Director 8 and up, imaging Lingo gave me new possibilities, but also some redesigning to do.
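
To put the voxel-count arithmetic above in concrete terms, here is a tiny Python sketch that reproduces the numbers quoted for a cube's surface; the count grows with the square of the resolution, while the polygon count stays at 6 or 12.

    def surface_voxels(n):
        return 6 * n * n          # one voxel per point on the 6 faces of an n x n x n cube

    for n in (16, 128):
        print(n, surface_voxels(n))   # 16 -> 1536, 128 -> 98304 (the numbers quoted above)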

 

How it Works.

The original Cubic Sculptor works in the following way: actually, the name is deceiving; it doesn't really sculpt, it creates stuff (voxels) out of nowhere. It was faster to add stuff to the 3d grid than to remove stuff, because most objects are much smaller than the 3d grid. So the way it works is that a routine scans the depth map of a tile for each side of the 3d cube grid, and places voxels at the corresponding depth perpendicular to that side (a dark value places a voxel closer to the current side) inside the 32x32x32 cubic array virtual grid. Optionally it will continue to put voxels on the same axis, repeating and incrementing the depth a specific number of times. Each voxel turned on in the 3d grid is assigned a color value which is taken from the original 16x16 graphic tile. A voxel is not set if the pixel in the depth map is a color (as opposed to gray or blue scale). A value assigned to the whole cube object is the "fill depth offset", which offsets the values in the depth map, bringing each "sculpted" side closer to or farther from the center of the 3d grid.
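
For the curious, here is a rough Python sketch of that scanning routine for the single side at z = 0; the grid size, function name and value handling are my own illustration (the real tool is written in Director Lingo), and the maps are assumed to already be scaled up to the grid's resolution.

    GRID = 32    # the 3d virtual cubic grid is 32x32x32

    def sculpt_side(grid, colors, depth_map, tile, fill_count=1, fill_offset=0):
        """Sculpt the side at z = 0, scanning along +z (the 5 other sides work the same way).

        depth_map[y][x] : 0-255 grayscale, dark = close to this side; None marks a
                          colored (non gray/blue) pixel, which places no voxel.
        tile[y][x]      : color taken from the original graphic tile.
        fill_count      : how many voxels to stack behind the first one on the same axis.
        fill_offset     : the "fill depth offset" shifting the whole side toward/away from the center.
        """
        for y in range(GRID):
            for x in range(GRID):
                value = depth_map[y][x]
                if value is None:
                    continue
                depth = round(value / 255 * (GRID // 2 - 1)) + fill_offset
                for k in range(fill_count):            # optionally keep stacking deeper
                    z = min(max(depth + k, 0), GRID - 1)
                    grid[x][y][z] = True               # voxel turned on
                    colors[x][y][z] = tile[y][x]       # color copied from the original tile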

The user can turn the sculpting process on and off for each of the 6 sides, and can optionally only paint the texture on the corresponding side (after the object is first sculpted) without sculpting. All this data, except the resulting 3d grid content, is saved for each cubic object (only in memory in the web version). The actual on-screen rendering of those voxels is what evolved and changed the most over the course of my project. At first it was done by sequentially plotting 4x4-pixel micro-cubes using a simple isometric orthographic perspective: draw one cube, go 2 pixels left, 1 pixel down, etc... By painting a micro-cube filled with the corresponding color and overlaying a "shadowing" micro-cube, I could make the 3 visible sides of the rendered cubic object individually darker or lighter (but at a more or less fixed value, not in real time like in the current version). The problem was that even for simple 45° angles it created a visible staircase effect that was amplified by the shadowing process. I tried to get rid of the artifact by blurring the images, but it was not practical (there is no built-in scriptable blurring effect in Director/Shockwave).
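
Here is a loose sketch (Python pseudocode) of that early micro-cube plotting scheme; the pixel offsets, drawing order and callback are guesses based only on the "2 pixels left, 1 pixel down" description, not the renderer's real values.

    GRID = 32

    def plot_voxels(grid, colors, draw_micro_cube, origin=(200, 100)):
        """Plot every voxel as a 4x4 micro-cube in a simple isometric orthographic projection."""
        ox, oy = origin
        cells = [(x, y, z) for z in range(GRID) for y in range(GRID) for x in range(GRID)
                 if grid[x][y][z]]
        cells.sort(key=lambda p: p[0] + p[1] + p[2])      # rough back-to-front painter order
        for x, y, z in cells:
            sx = ox + (x - y) * 2                         # 2 pixels sideways per voxel step
            sy = oy + (x + y) * 1 - z * 2                 # 1 pixel down per step, minus height
            draw_micro_cube(sx, sy, colors[x][y][z])      # colored micro-cube + shading overlay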

 

Welcoming Marching Cubes.

The solution I found was to use "Marching Cubes" to render the objects. Marching cubes are usually used to transform voxel objects into polygon objects. I don't use them as such in my project; I use them to smooth the edges and also to get a lighting map of the object. The marching cubes algorithm scans a 3d grid and finds, for each voxel, which of the 256 common marching cubes fills the space best according to the presence or absence of its surrounding neighbors; for example, if the voxel is part of a staircase-like formation, the chosen marching cube will have a 45° slant. Think of it as using those special slanted Lego bricks to fill the gaps and make a slanted 45° house roof. You can find the corresponding marching cube number simply by testing the 8 neighbors at the 8 corners of the voxel, which incidentally gives an 8-bit binary number (256 possible values) that is then used to choose the marching cube.
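
Finding that number is simple enough to sketch; here is a minimal Python version, with an arbitrary corner ordering (the real lookup depends on whatever fixed ordering the pre-rendered set uses).

    # The 8 corners of a cell, each contributing one bit to the 8-bit index.
    CORNERS = [(0,0,0), (1,0,0), (1,1,0), (0,1,0),
               (0,0,1), (1,0,1), (1,1,1), (0,1,1)]

    def marching_cube_index(grid, x, y, z):
        index = 0
        for bit, (dx, dy, dz) in enumerate(CORNERS):
            if grid[x + dx][y + dy][z + dz]:       # corner voxel present?
                index |= 1 << bit
        return index                               # 0-255, picks one of the 256 pre-rendered cubes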

What I did was pre-render those 256 marching cubes in the following way: I designed another program that built those cubes using polygons and generated a 3DMF (QuickDraw 3D) file for each marching cube. Then, using a 3DMF viewer, I rendered the cubes at a 30°/45° angle in isometric perspective.

 

Light Encoding.

I thought about lighting the x, y and z axes using red, green and blue directional lights, so that a side directly facing the x direction would be solid red, a side facing the y direction green, and a side pointing at the camera would appear yellow (being half-way between x and y, and red + green = yellow). So the lighting angle of each face of each marching cube is encoded in its color information. I then reduced the cube images to a more usable size of about 10 pixels in width. The Cubic Sculptor now renders using those color-coded marching cubes to get a lighting map that can be used in real time to light the cubic objects in a game. After the light map is rendered, the image is reduced (again) to produce a 50x50 image, which is then color-separated. Reducing the image doesn't negatively affect the encoded lighting information, since these are 3 "parallel" colors; the anti-aliasing actually only smooths things out. The result is 3 50x50 greyscale lighting maps, one for each axis, where each point indicates by its darkness how much that point faces that particular axis. These 3 maps can be overlaid in imaging Lingo right on top of the color map, using some special "inks", to light the cubic object interactively.
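
As a rough illustration of the decoding side (Python rather than imaging Lingo): the three channels behave as three independent grayscale maps, one per axis, and relighting is just a weighted recombination of them. The polarity and the special inks of the real project are not reproduced here (the actual maps may store the inverse, darker = more facing).

    def split_axis_maps(light_map):
        """light_map: 2d list of (r, g, b) tuples -> three grayscale maps, one per axis."""
        xs = [[p[0] for p in row] for row in light_map]
        ys = [[p[1] for p in row] for row in light_map]
        zs = [[p[2] for p in row] for row in light_map]
        return xs, ys, zs

    def relight(light_map, ix, iy, iz):
        """Combine the three axis maps with per-axis light intensities ix, iy, iz (0.0-1.0)."""
        xs, ys, zs = split_axis_maps(light_map)
        h, w = len(light_map), len(light_map[0])
        return [[min(255, round(xs[j][i] * ix + ys[j][i] * iy + zs[j][i] * iz))
                 for i in range(w)] for j in range(h)]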

 

Color Encoding and Palette swapping.

Now about the color map. Originally in the project, the whole cubic object images were prerendered with a given palette and stored in the project files as they would look in the game (colored and lighted). On the NES, a 16x16 tile can be assigned one of the four active 4-color palettes. In Metroid, each of the 5 areas has 4 or 8 palettes available. Prerendering the cubic objects in each of those palettes would have multiplied the size of the project files by a factor of 4 to 8. Since the resulting rendered cube object contains way more than the original 4 colors (lighting and anti-aliasing produce intermediate values, best rendered in 32 bits), I could not simply do palette switching afterward on a rendered cube object. The solution was to use the same primary color coding used in the lighting map. In Metroid, one of the colors in the 4-color palette is always black, so I assigned black to "no color" and the 3 other colors to red, green and blue. The color map is rendered much the same way as the lighting map, except that the marching cubes are filled with solid red, green or blue, with no shading on them. The cube object's mask helps differentiate between "no color as in transparent" and "no color as in black".
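
A hedged sketch of the palette decode in Python (again, my own illustration, not the project's Lingo): since the color map stores "how much of palette color 1/2/3" in its R/G/B channels, swapping palettes becomes a per-pixel weighted blend instead of a full re-render.

    def apply_palette(color_map, palette):
        """color_map: 2d list of (r, g, b); palette: [black, c1, c2, c3] as (R, G, B) tuples."""
        out = []
        for row in color_map:
            out_row = []
            for r, g, b in row:
                px = [0.0, 0.0, 0.0]                      # black base = "no color"
                for weight, col in ((r, palette[1]), (g, palette[2]), (b, palette[3])):
                    for c in range(3):
                        px[c] += weight / 255 * col[c]    # blend in that palette color
                out_row.append(tuple(min(255, round(v)) for v in px))
            out.append(out_row)
        return out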

 

Resulting Data.

So the resulting greyscale image combines: 2 masks (one is redundant), 3 lighting maps for x, y and z, and 3 color maps for R, G and B. This 8-bit 100x200 greyscale image takes only twice the size of a single 50x50 32-bit color image (and I would have needed 4 or 8 of those instead of one). The maps are pre-rendered for the game and stored in its files. So there you have it: with this, the game can render the cubic objects with different lighting and palettes, changing them at will. For speed reasons, Metroid Cubed doesn't do interactive lighting (yet). To speed up real-time scrolling, each time you enter a room it prerenders each 16x15-tile "screen"; these are then combined on stage to show the whole room, and it patches the parts that need to be changed (like when you blow up a block or a block reappears). Real-time lighting will probably be used interactively in an update to my 3d Zelda project, since Zelda is made up of non-scrolling 16x10-tile screens. Still, this feature was useful in designing Metroid Cubed, since I can change the lighting without rerendering all the cube objects.
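
Here is a small Python sketch of unpacking such a sheet; the arrangement of the eight 50x50 maps inside the 100x200 image is my own assumption (a 2x4 grid of tiles), and only the map list and the size arithmetic come from the description above.

    TILE = 50
    NAMES = ["mask_a", "mask_b", "light_x", "light_y", "light_z",
             "color_r", "color_g", "color_b"]

    def unpack_sheet(sheet):                  # sheet: 200 rows x 100 columns of 0-255 values
        maps = {}
        for i, name in enumerate(NAMES):
            ty, tx = divmod(i, 2)             # assumed layout: 4 rows x 2 columns of 50x50 tiles
            maps[name] = [row[tx*TILE:(tx+1)*TILE]
                          for row in sheet[ty*TILE:(ty+1)*TILE]]
        return maps

    # Size check: 100 * 200 * 1 byte = 20000 bytes, exactly twice a single
    # 50 * 50 * 4 byte (32-bit) image at 10000 bytes.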

 

Rotating the future?

The Cubic Sculptor has great potential for expansion, I think. Right after the sculpting, I can extract 32x32 depth maps of the resulting sculpted objects from any of the 6 sides of the 3d grid, so a game could apply interactive shadows and reflections that precisely fit the bumps and shapes (see the small sketch at the end of this section). Also, I have a project of going back and rendering all the cubes again, rotating them by 15° each time, giving 24 different angles. I could then incorporate them into TCS and have pre-renders at a number of angles. It means I could display a user-rotatable playfield ranging from 0 to 45°. At 0° the blocks would face the viewer just like in the original 2d version, but with an isometric parallel perspective of the top of each block extending behind the top of the facing side. This is where I think TCS will shine: at that angle the facing sides will look almost exactly like the original graphics but with some added depth, and by rotating the playfield you will be able to see the transition between the original look and my 45° twist. For now I decided to use a fixed perspective in Metroid Cubed, for memory/disk space considerations, since each angle multiplies the space used by a prerendered cubic object. Again, Zelda could be a great candidate for this, because it has fewer than 64 tiles for the overworld (see the Zelda Cubed demo).

I'm actually somewhat proud of TCS: designing tiles requires only a little pixel painting on a 16x16 or 32x32 grid and some tweaking of fill depths and offsets, and it produces small but detailed 3d objects that can be lit interactively. It could be used in designing many other types of games; for example, a really cool Marble Madness game is rotating somewhere in my head (literally). It could be used for an RC car game with uneven terrain. RPG (role playing game) graphics could also be created quickly, but ehm, I don't like RPGs, I call them "menu games". TCS could be great also for "go chop the wood" games ;) (Starcraft and some others use prerendered voxel sprites, I think).
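
Going back to the depth-map extraction mentioned at the start of this section, here is a small Python sketch of the idea for one of the 6 sides: scan each column until the first filled voxel and record its distance from that side. The names and conventions are mine.

    GRID = 32

    def extract_depth_map(grid):
        """Extract a 32x32 depth map from the side at z = 0 of a sculpted voxel grid."""
        depth_map = [[None] * GRID for _ in range(GRID)]     # None = empty column
        for y in range(GRID):
            for x in range(GRID):
                for z in range(GRID):
                    if grid[x][y][z]:
                        depth_map[y][x] = z                  # distance from the z = 0 side
                        break
        return depth_map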

 

Metroid Cubed is da thang.

Metroid Cubed uses TCS-rendered cubic objects to recreate the original Metroid experience, but in a 3d isometric view. It tries to give the best possible approximation of what the game's blocks would look like if we could see their tops and sides. The engine that lets Samus Aran (Metroid's heroine) move and interact with the game was designed by myself; it was first built for a 2d reproduction of the original game in a personal project. Adapting it to the isometric perspective was easy once all of Samus's sprites were converted to 3d using TCS. One thing that had to be added was selective masking, which draws over Samus only the blocks that are in front of her from our point of view, that is, the blocks in front of her and just on top of her. The blocks behind her and the ones she is standing on are part of the background and appear behind her sprite. Some blocks in front are painted using the "lightest" ink, which means you can still see Samus behind their darker parts.

Since I'm also working on a Metroid level editor called "Metroid ROMedit", I know how to extract the room level data. Metroid Cubed itself doesn't contain the ROM data, but anyway, I hope you are a loyal Nintendo fan who at least once owned the original Metroid, or Metroid Prime, which contains the game. The bombs are usable, you can even do the bomb jump! You can blow up or shoot breakable bricks and they will reappear some time afterward, like in the original game. There are also blocks that you can go through. I think I covered most of the Metroid engine for Samus and background interaction, and the sprite animation is close to the original too. I implemented all the guns, including the secret one (the icy wavebeam particle), and you can switch them in game. I intend to tweak some cube objects, and Samus's sprites could use some corrections (she does have a large helmet in the real game).

Another thing to mention: I made a map that defines each room as interconnected screens (a screen is 16x15 tiles and is mapped into the game's 32x32 world map), so that the program only has to render a handful of screens when you enter a room. So when you play the game, you are in a room that can be, for example, 10 screens wide. This has many advantages, like not letting you see the rooms underneath or on the other side of a door. It is also useful for containing enemies and easily tracking their positions when they leave the screen, resetting them when you enter a room. At one point I added more depth to the rooms, in that case a curved 3d background.

Rotating the future 2? (Addendum)

Metroid Cubed has evolved since I wrote what is above, but it is still not finished. I have also learned many things since, and I should rewrite parts of this page; some things don't even apply anymore. Since I'm learning Shockwave 3d, I'm thinking about using its 3d engine to render the blocks, creating custom meshes with marching cubes to convert the voxels into polygons. The only difference from my current approach is that I would be using a real 3d engine instead of painting prerendered marching cube bitmaps; that would allow easy rendering at any angle. These blocks couldn't be used in a fully polygonal game though, since most of them are made of around 5000 polygons on average, and a single Metroid screen can contain up to 240 of them. So the blocks will be pre-rendered into bitmaps, just as TCS does now. I will also keep the light-to-color encoding and the dynamic palette.

In parallel, I found a way to display live voxels quickly and I implemented live rotation in the "Metroid Rotation Demo". Unlike Metroid Cubed, the demo doesn't have lighting effects and, for now, no enemies. The lack of lighting is actually a good thing in a way, since it keeps the original colors from the game and lets it morph smoothly from the original flat 2d point of view to a 3d rotating view. The live voxel process is really simple: I added a feature in TCS to export the 32x32x32 voxel data of each block into bitmaps. The bitmaps contain each slice of the object, and in Metroid 99% of objects only use 21 of the 32 possible slices. So the demo simply renders 21 versions of the room, each of them representing one slice of the room. The 256x240x21 voxel field is then rendered by stacking up parts of these 21 bitmaps. The main problem with this technique is that when the slices get close to edge-on to the viewer they become glitchy, and at 90 degrees they become invisible. The live voxel method is also much slower than Metroid Cubed's, so for now the rotation demo is just that... a demo.
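
To give an idea of the slice-stacking approach (Python pseudocode, with made-up offsets and a placeholder blit routine rather than the demo's real numbers): the room's 21 slice bitmaps are drawn back to front, each shifted in proportion to its depth, so that at 0° everything lines up into the flat 2d view and the spread grows as the view rotates.

    import math

    SLICES = 21    # in Metroid, most objects only use 21 of the 32 possible slices

    def render_room(slice_bitmaps, angle_degrees, blit):
        """Draw the 21 room slices back to front, shifting each one by its depth."""
        a = math.radians(angle_degrees)
        for depth in range(SLICES - 1, -1, -1):          # farthest slice first
            dx = round(math.sin(a) * depth * 2)          # sideways shift grows with depth
            dy = round((1 - math.cos(a)) * depth)        # slight vertical spread while tilting
            blit(slice_bitmaps[depth], dx, dy)           # paste that slice of the room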

 

To be continued...
