n29
Developer
Joined: 13 Sep 2005
Posts: 879

Posted: Wed Feb 03, 2010 8:09 pm Post subject: Development Log - Minimal 3d Software Renderer 


I've been riding the '3d software renderer in 4k of Java bytecode' dragon again lately and thought I'd put my results in a dev log, since software rendering is a generally interesting subject (admit it, you geeks! /joke). For reference, I previously spent some time on this: http://gamedevelopersrefuge.org/GDR/viewtopic.php?t=218.
When contemplating a 'minimal' 3d software renderer that will fit into 4k of compressed bytecode and leave room for something gamelike, there isn't room for a lot of standard 'techniques'. Ideally, all the logic should fit into one method of one class: initialization, setup, main loop, rendering loop, logic, everything. No extra classes, not even helpful ones like Matrix4f or Vector3f. The result is code written in a very low-level, non-OO manner. Not even procedural, since, as mentioned, everything SHOULD fit into one God method. It's rather Assembler-like, in fact.
When doing software 3d, the 'one method' approach breaks down because you have to concatenate the model-space and view-space transformations; that's the job normally done by matrix operations, but we don't have matrices. I.e., there's a camera with a rotation and translation, and each mesh in the game has a local rotation and translation. To draw an object, first the mesh is rotated, then translated, then the camera translation and rotation are applied. Repeating the same code twice for this case is simply too expensive in bytes that end up in the final executable. It would be nice to have methods translate and rotate that take a mesh and apply the transformation, like:
Code: 
mesh = rotate(cameraRot, translate(cameraTrans, translate(meshTrans, rotate(meshRot, mesh))))

A 'mesh' in this case isn't entirely accurate. The question of how to represent 3d data is important. As stated, the goal is not to introduce extra classes, so the solution is to represent the data as arrays. Here each tri is 3 indexes into the vertex array, one for each corner of a triangle:
Code: 
double [][] verts = {{x1, y1, z1}, {x2, y2, z2}, ...}
int [][] tris = {{v1, v2, v3}, {v4, v5, v6}, ...}

In the case of rotation and translation we are only concerned with the vertexes. Below are the two ESSENTIAL methods I ended up with. They simply apply the rotations in the most basic, least-code way:
Code: 
// a = {rotation X, rotation Y, rotation Z} in degrees
// v = {x, y, z}
public double[] rotXYZ(double[] a, double[] v) {
    double[] v1 = new double[] {0, 0, 0};
    double[] v2 = new double[] {0, 0, 0};
    // rotX (.01745329 is pi/180, degrees to radians)
    double cos = Math.cos(a[0] * .01745329);
    double sin = Math.sin(a[0] * .01745329);
    v1[0] = v[0];
    v1[1] = cos*v[1] - sin*v[2];
    v1[2] = sin*v[1] + cos*v[2];
    // rotY
    cos = Math.cos(a[1] * .01745329);
    sin = Math.sin(a[1] * .01745329);
    v2[0] = cos*v1[0] + sin*v1[2];
    v2[1] = v1[1];
    v2[2] = cos*v1[2] - sin*v1[0];
    // rotZ
    cos = Math.cos(a[2] * .01745329);
    sin = Math.sin(a[2] * .01745329);
    v1[0] = cos*v2[0] - sin*v2[1];
    v1[1] = sin*v2[0] + cos*v2[1];
    v1[2] = v2[2];
    return v1;
}

public double[] translate(double[] t, double[] v) {
    double[] v1 = new double[] {0, 0, 0};
    v1[0] = v[0] + t[0];
    v1[1] = v[1] + t[1];
    v1[2] = v[2] + t[2];
    return v1;
}
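Tying the two methods together, the transform pseudocode from earlier composes per vertex like this. A standalone sketch: the class name, the toView helper, and its argument order are my own, and Math.toRadians stands in for the post's hard-coded degrees-to-radians constant.

```java
// Sketch: composing the mesh and camera transforms per vertex using the
// two methods from the post (copied here so this compiles on its own).
public class TransformDemo {
    static double[] rotXYZ(double[] a, double[] v) {
        double[] v1 = new double[3];
        double[] v2 = new double[3];
        // rotX
        double cos = Math.cos(Math.toRadians(a[0]));
        double sin = Math.sin(Math.toRadians(a[0]));
        v1[0] = v[0];
        v1[1] = cos * v[1] - sin * v[2];
        v1[2] = sin * v[1] + cos * v[2];
        // rotY
        cos = Math.cos(Math.toRadians(a[1]));
        sin = Math.sin(Math.toRadians(a[1]));
        v2[0] = cos * v1[0] + sin * v1[2];
        v2[1] = v1[1];
        v2[2] = cos * v1[2] - sin * v1[0];
        // rotZ
        cos = Math.cos(Math.toRadians(a[2]));
        sin = Math.sin(Math.toRadians(a[2]));
        double[] out = new double[3];
        out[0] = cos * v2[0] - sin * v2[1];
        out[1] = sin * v2[0] + cos * v2[1];
        out[2] = v2[2];
        return out;
    }

    static double[] translate(double[] t, double[] v) {
        return new double[] { v[0] + t[0], v[1] + t[1], v[2] + t[2] };
    }

    // model space -> world space -> view space, one vertex at a time,
    // matching the nested pseudocode from the post
    static double[] toView(double[] camRot, double[] camTrans,
                           double[] meshRot, double[] meshTrans, double[] v) {
        return rotXYZ(camRot,
               translate(camTrans,
               translate(meshTrans,
               rotXYZ(meshRot, v))));
    }
}
```

In a real loop you would call toView once per vertex of each mesh before projecting to the screen.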

Edit: clarified a couple of points
Edited by n29 on Thu Feb 04, 2010 5:28 am; edited 1 time 




Sirocco
Moderator
Joined: 19 Aug 2005
Posts: 9471
Location: Not Finland

Posted: Thu Feb 04, 2010 4:11 am Post subject: 


I like software rendering. Passing everything off to a GPU is so boring! 




Gil
Developer
Joined: 14 Nov 2005
Posts: 2341
Location: Belgium

Posted: Thu Feb 04, 2010 9:23 am Post subject: 


Fun fact:
I wrote my first software 3D renderer in pascal when I was 15 on a school trip. On paper.
It was slow as hell, obviously, but I'm pretty sure I get ultimate geek points for it. The idea came to me while playing Goldeneye. I had never heard of a transformation matrix; it just dawned on me looking at the game, based on the trigonometry we had learned in school the previous year.
I also ended up drunk with a girl in my bed on that school trip, which probably negates the geek points though. _________________ PoV: I had to wear pants today. Fo shame! 




n29
Developer
Joined: 13 Sep 2005
Posts: 879

Posted: Sat Feb 06, 2010 6:13 am Post subject: 


I forgot to mention that this log will be about polygon rendering; there are of course other techniques for doing 3d: ray casting (a la Doom) and voxels. I haven't explored those yet.
There seem to be two generally accepted methods for rendering polygons, plus a third that is cheating:
1. Most windowing systems provide access to a graphics context associated with a window, including a drawPolygon method; in Java this is Graphics.drawPolygon(). Using this method to draw 3d graphics has one major drawback: no z-buffer, so the 'painter's algorithm' must be used, i.e., sort the polygons from back to front and draw the farthest away first. This technique also requires special care when constructing meshes to avoid/minimize 'z-fighting'.
2. The halfspace method. Find the minimum bounding rectangle for a triangle and, for each point in the rectangle, check if the point is also within the triangle by calculating the dot product with each edge.
3. Scan line rasterizing. This is the one you'll read about the most: decompose the triangle into a list of line segments and draw the line segments.
I'll talk about all of these and how they fare when compressed into 4k of bytecode. 
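The sorting step of method 1, the painter's algorithm, can be sketched like this, assuming the verts/tris array layout from the first post (with the index array as int[][]; paintOrder is a hypothetical name):

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch of painter's-algorithm ordering: with no z-buffer, sort triangles
// back-to-front on average view-space depth and draw them in that order.
public class PainterSort {
    // returns triangle indices ordered farthest-first
    static Integer[] paintOrder(double[][] verts, int[][] tris) {
        Integer[] order = new Integer[tris.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        Arrays.sort(order, Comparator.comparingDouble((Integer t) -> {
            int[] tri = tris[t];
            // average z of the three corners; larger z = farther from the camera
            return (verts[tri[0]][2] + verts[tri[1]][2] + verts[tri[2]][2]) / 3.0;
        }).reversed());
        return order;
    }
}
```

Average-z sorting is the simplest choice; it can still misorder overlapping triangles of very different sizes, which is part of why careful mesh construction matters with this method.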




Bean
Admin
Joined: 20 Aug 2005
Posts: 3780

Posted: Sat Feb 06, 2010 3:12 pm Post subject: 


hehe reminds me of old SFB2: Vector Warriors
Wasn't true 3D since there was no rotation, but for BASIC it was one of the cooler things I've done.
It'll be great when we move away from current 3D rendering.
Bean _________________ Kevin Reems :: Nuclear Playground :: Solid Driver 




sam
Moderator
Joined: 22 Aug 2005
Posts: 255
Location: The high seas

Posted: Sun Feb 07, 2010 3:20 pm Post subject: 


yeah that game was amazing, i think you released an sdk for it in some form as well which i had some fun with at the time
reminds me of an awesome racing game that was made in basic as well which had a good 3d engine, can't remember its name though 




Hard Rock
Contributor
Joined: 31 Aug 2005
Posts: 238





Bean
Admin
Joined: 20 Aug 2005
Posts: 3780

Posted: Sun Feb 07, 2010 10:41 pm Post subject: 


Yeah there was an SDK but to my knowledge no one ever made a game with it.
Never saw that Groov Buggies. Looks funky :)
edit: Just tried it out. It's pretty impressive for BASIC. And unlike my game it's true 3D. Runs slow though. Still quite an accomplishment.
Bean _________________ Kevin Reems :: Nuclear Playground :: Solid Driver 




syn9ne
Contributor
Joined: 09 Jan 2010
Posts: 375

Posted: Wed Feb 10, 2010 9:33 am Post subject: 


Stick fighters brawl was so bad ass at the time. ive considered doing something like that many times. even a couple months ago >) _________________ The Hideout Games :: Pinterest :: YouTube :: Bandcamp 




sam
Moderator
Joined: 22 Aug 2005
Posts: 255
Location: The high seas

Posted: Wed Feb 10, 2010 2:23 pm Post subject: 


groov buggies was totally the one, i spent ages playing that game. really well done. 




n29
Developer
Joined: 13 Sep 2005
Posts: 879

Posted: Sun Feb 28, 2010 10:31 am Post subject: 


Above I have the mesh data as arrays of doubles (verts) and ints (tri indices). This is just about the worst thing you can do in terms of Java bytecode. The compiler implements a literal array declaration like:
Code:  double [][] verts = {{x1, y1, z1}, {x2, y2, z2}, ...} 
as a call to new up an array, then load every value into the array individually. So that's a push and a store instruction for every value. That's a lot of bytes in the compiled class file. A much better solution is to encode the data into a character string, store it in a file, load it, and extract the data from the characters. Considering this, I started planning how to store the most data in the minimal number of bytes.
If I only allow each dimension of a vertex to range over the whole numbers 0..15, then I can fit a vertex into 12 bits (4 bits per dimension), and if the number of vertexes is constrained to 256, I can make each point of the triangle an 8-bit index. Using a color table, I can set the color of each triangle in 4 bits for 16 colors. So given this scheme, I can fit the data for one triangle into at most 8 bytes. This will average out to far less, since many triangles will share vertices.
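The packing arithmetic described above might look like this. Method names are hypothetical, and this only shows the bit layout, not the character-string encoding:

```java
// Sketch of the bit layout: coordinates in 0..15 (4 bits each) so a vertex
// fits in 12 bits; a triangle is three 8-bit vertex indices plus 4 bits
// of color-table index.
public class Pack {
    // x, y, z each in 0..15 -> one 12-bit value
    static int packVertex(int x, int y, int z) {
        return (x << 8) | (y << 4) | z;
    }
    static int[] unpackVertex(int p) {
        return new int[] { (p >> 8) & 15, (p >> 4) & 15, p & 15 };
    }
    // three vertex indices (0..255) and a color (0..15) -> 28 bits
    static int packTri(int v1, int v2, int v3, int color) {
        return (v1 << 20) | (v2 << 12) | (v3 << 4) | color;
    }
    static int[] unpackTri(int p) {
        return new int[] { (p >> 20) & 255, (p >> 12) & 255, (p >> 4) & 255, p & 15 };
    }
}
```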
For simplicity, I've always planned on using flat shading. To calculate the intensity of the face color, you take the dot product of the face normal with the light source direction. The face normal can be calculated from the face vertexes in view space, or calculated once in model space, stored, then rotated with the face into view space. In the 4k Star Wars project I calculated face normals on the fly and this worked adequately. Another solution is to store face normals (or normals in general) as indexes into a normal table. I didn't find anything online about how to do this. But after thinking about it for a while, it seems that visually, for a normal table you'd have n normals as arrows pointing out of a sphere. The number of entries is how many arrows are sticking out of the sphere. Then the question is how to calculate n evenly distributed points on a sphere. I don't know how to do this yet. Using this approach, I could fit a normal into 4 bits for 16 indexes or 8 bits for 256 indexes. 16 doesn't seem like a lot, so I think a byte-sized index for the face normal of each triangle would be the way to go. So we have:
Code: 
vertex:
1.5 bytes
triangle:
3 bytes vertex index
4 bits color
1 byte face normal table index

So that's at most 9 bytes per triangle. If the data is fairly regular, and it should be, then it should compress very well. Using the literal array approach above it's something horrendous, like 20 or more bytes per triangle, at least. Also, if normals are suddenly cheap using the normal table approach, why not implement smooth/Gouraud shading?
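The flat-shading calculation mentioned above (dot product of the face normal with the light direction) can be sketched as:

```java
// Sketch of flat shading: derive the face normal from the vertices via a
// cross product, then dot it with a unit light direction for intensity.
public class FlatShade {
    // Face normal of triangle (a, b, c) from the cross product of two edges.
    // Assumes a non-degenerate triangle (non-zero area).
    static double[] faceNormal(double[] a, double[] b, double[] c) {
        double[] u = { b[0]-a[0], b[1]-a[1], b[2]-a[2] };
        double[] w = { c[0]-a[0], c[1]-a[1], c[2]-a[2] };
        double[] n = { u[1]*w[2] - u[2]*w[1],
                       u[2]*w[0] - u[0]*w[2],
                       u[0]*w[1] - u[1]*w[0] };
        double len = Math.sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
        return new double[] { n[0]/len, n[1]/len, n[2]/len };
    }

    // Intensity = dot product with the light direction, clamped at 0
    // so faces pointing away from the light are simply unlit.
    static double intensity(double[] normal, double[] light) {
        double d = normal[0]*light[0] + normal[1]*light[1] + normal[2]*light[2];
        return Math.max(0, d);
    }
}
```

With a normal table, faceNormal would be replaced by a table lookup on the stored byte index.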
Edit: I had added up the bits per triangle incorrectly 




n29
Developer
Joined: 13 Sep 2005
Posts: 879

Posted: Mon Mar 01, 2010 6:36 pm Post subject: 


In the 4k Star Wars project I built meshes by stretching and rotating 1x1x1 cubes. With this 'new' scheme of encoding meshes in a byte/char stream, I need a modeller that will let me constrain vertexes to whole numbers and output a format I can work with (plus a post-processing step to massage the data into a form I can use). So I started going back through some Blender tutorials. Blender will let me snap vertexes to a grid, and it will export to the .obj format, which is plain text. Here is a minimal vertex/triangle attempt at an X-wing:
This is Blender running in a VirtualBox Win XP instance on my Ubuntu laptop (9.04). Unfortunately this version of Ubuntu has gfx driver issues that keep Blender from working properly without upgrading a bunch of stuff. I'm just going to wait for 10.04 before I upgrade again. 




xearthianx
Developer
Joined: 28 Sep 2006
Posts: 771
Location: USA! USA!

Posted: Thu Mar 18, 2010 12:00 am Post subject: 


n29 wrote:  Another solution is to store face normals (or normals in general) as indexes into a normal table. I didn't find anything online about how to do this. But after thinking about it for a while, it seems that visually, for a normal table you'd have n normals as arrows pointing out of a sphere. The number of entries is how many arrows are sticking out of the sphere. Then the question is how to calculate n evenly distributed points on a sphere. I don't know how to do this yet. 
You would be able to do the normal table for a circle or thin ring, yes? Just rotate the ring about some axis, and you just built yourself a sphere. Calculate the normal vectors you need along the way, and ignore the redundant ones at the "poles". _________________ Ionoclast Laboratories  Scientia et Dominatia! 




syn9ne
Contributor
Joined: 09 Jan 2010
Posts: 375

Posted: Thu Mar 18, 2010 6:05 am Post subject: 


i'd go 8 bits for 256 indexes, and i'd use an ico sphere in blender to get the evenly spaced vertices. then either load their normals in obj format, or load their vertices and normalize the mesh. once the mesh is normalized, all the vertices become the normals you need.
if you dont want to load an external file, or dont have room to store the precalculated normals in code, you can just use a formula for an ico sphere and generate them at runtime. basically, you'd be generating the 12 vertices of a 20 sided die, then subdividing each equilateral triangle until you have as many vertices as you want, normalize the mesh, and voila!
also, not sure if it'd help, but you might be able to reduce your rotation code by using axis-angle rotation, which will also prevent gimbal lock, which your current method can cause.
http://en.wikipedia.org/wiki/Icosphere
http://en.wikipedia.org/wiki/Rodrigues%27_rotation_formula _________________ The Hideout Games :: Pinterest :: YouTube :: Bandcamp 
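syn9ne's starting point, the 12 vertices of a 20-sided die (icosahedron), can be generated from the golden ratio; normalizing them yields a 12-entry normal table before any subdivision. A sketch (not from the thread; class and method names are my own):

```java
// Sketch: the 12 icosahedron vertices are the cyclic permutations of
// (0, +-1, +-phi) where phi is the golden ratio. Normalized, each vertex
// doubles as a unit normal; subdividing the faces would grow the table.
public class NormalTable {
    static double[][] icosahedronNormals() {
        double p = (1 + Math.sqrt(5)) / 2;   // golden ratio
        double[][] v = {
            { 0,  1,  p}, { 0,  1, -p}, { 0, -1,  p}, { 0, -1, -p},
            { 1,  p,  0}, { 1, -p,  0}, {-1,  p,  0}, {-1, -p,  0},
            { p,  0,  1}, {-p,  0,  1}, { p,  0, -1}, {-p,  0, -1},
        };
        for (double[] n : v) {
            double len = Math.sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
            n[0] /= len; n[1] /= len; n[2] /= len;
        }
        return v;
    }
}
```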




Sirocco
Moderator
Joined: 19 Aug 2005
Posts: 9471
Location: Not Finland





n29
Developer
Joined: 13 Sep 2005
Posts: 879

Posted: Tue Oct 26, 2010 7:49 pm Post subject: 


Winter means the Java4k is coming around again, so I've started thinking about 4k software rendering (again).
Going back to where I stopped last time, I was reimplementing the 'half space' renderer I had used in the 4k Star Wars prototype. Only this time I was seeing a weird artifact in the render. You can see in the image below that the line where the two cubes intersect should be straight, but there's a slight curve. I still cannot figure out why this is. I think it may have something to do with my z-buffer precision, how I'm calculating the z values, or the perspective projection. It's really annoying.
Code: 
BufferedImage bb;
double[] zbuffer = new double[400 * 400];

void render(double[] v1, double[] v2, double[] v3, int color) {
    // clip to z
    if (v1[2] < 0 || v2[2] < 0 || v3[2] < 0) return;
    // project to screen 400x400
    double y1 = (200d*v1[1]/v1[2])+200d;
    double y2 = (200d*v2[1]/v2[2])+200d;
    double y3 = (200d*v3[1]/v3[2])+200d;
    double x1 = (200d*v1[0]/v1[2])+200d;
    double x2 = (200d*v2[0]/v2[2])+200d;
    double x3 = (200d*v3[0]/v3[2])+200d;
    double z1 = v1[2];
    double z2 = v2[2];
    double z3 = v3[2];
    // backface culling
    if ((x1-x2)*(y3-y2) - (y1-y2)*(x3-x2) <= 0) return;
    // the plane equation for this triangle: ax + by + cz = d
    double a = (y2 - y1)*(z3 - z1) - (y3 - y1)*(z2 - z1);
    double b = (x2 - x1)*(z3 - z1) - (x3 - x1)*(z2 - z1);
    double c = (x2 - x1)*(y3 - y1) - (x3 - x1)*(y2 - y1);
    double D = x1*a + y1*b - z1*c;
    // bounding rectangle
    int minx = (int)Math.max(0,   Math.min(x1, Math.min(x2, x3)));
    int maxx = (int)Math.min(400, Math.max(x1, Math.max(x2, x3)));
    int miny = (int)Math.max(0,   Math.min(y1, Math.min(y2, y3)));
    int maxy = (int)Math.min(400, Math.max(y1, Math.max(y2, y3)));
    // scan through bounding rectangle
    for (int y = miny; y < maxy; y++) {
        for (int x = minx; x < maxx; x++) {
            // when all halfspace functions are positive, the pixel is in the triangle
            if ((x1 - x2) * (y - y1) - (y1 - y2) * (x - x1) > 0 &&
                (x2 - x3) * (y - y2) - (y2 - y3) * (x - x2) > 0 &&
                (x3 - x1) * (y - y3) - (y3 - y1) * (x - x3) > 0) {
                // clamp to visible area of display
                if (x > 0 && x < 400 && y > 0 && y < 400) {
                    double z = (D - a*x + b*y)/c;
                    int i = 400*y + x;
                    if (z < zbuffer[i]) {
                        bb.setRGB(x, y, color);
                        zbuffer[i] = z;
                    }
                }
            }
        }
    }
}

This algorithm is explained in this link which I'm sure I've posted here before:
http://www.devmaster.net/codespotlight/show.php?id=17
So at this point I'm going to skip it and go on to triangle rasterization.
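For what it's worth, one property worth ruling out when hunting an artifact like this: view-space z is not an affine function of screen x and y, but 1/z is, so interpolating raw z across screen pixels from a plane equation can bend lines that should be straight. A small numeric sketch (all values hypothetical) contrasting the two interpolations:

```java
// Sketch: a segment from view-space (x=-1, z=1) to (x=1, z=3), projected
// with screenX = x / z. At the screen-space midpoint, interpolating z
// linearly gives the wrong depth; interpolating 1/z gives the right one.
public class DepthDemo {
    static final double X1 = -1, Z1 = 1, X2 = 1, Z2 = 3;

    // true view-space z at the screen-space midpoint of the projected segment
    static double trueZ() {
        double sm = (X1 / Z1 + X2 / Z2) / 2;                      // screen midpoint
        // solve (X1 + (X2-X1)t) / (Z1 + (Z2-Z1)t) = sm for the segment parameter t
        double t = (sm * Z1 - X1) / ((X2 - X1) - sm * (Z2 - Z1));
        return Z1 + (Z2 - Z1) * t;
    }
    // treating z as affine across the screen: wrong
    static double linearZ() { return (Z1 + Z2) / 2; }
    // treating 1/z as affine across the screen, then inverting: matches trueZ
    static double recipZ() { return 1 / ((1 / Z1 + 1 / Z2) / 2); }
}
```

Whether this is the actual cause of the curve above depends on how the plane-equation z relates to the projected coordinates, but it's a cheap hypothesis to test.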
Edit: replaced code with more readable version 




