Development Log - Mutiny Engine
PoV
Moderator

Joined: 21 Aug 2005
Posts: 10899
Location: Canadia
PostPosted: Thu Apr 11, 2013 1:01 am

Are the PNG outputs actually wrong, or just bigger than they need to be? I can't imagine a BMP being smaller than a PNG even with alpha.

STB_Image and STB_Image_write are single-file libraries for image loading and saving. Not full-featured, but enough. You still need to extract the data somehow, but you have the choice of RGB and RGBA (or even 1-channel luma (gray) and 2-channel luma+alpha).

http://nothings.org
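
Typical usage looks something like this (a from-memory sketch, untested; the function and file names are made up):
Code:
// Sketch from memory -- untested. Both libs are single-file: define the
// *_IMPLEMENTATION macro in exactly one .cpp before including.
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image_write.h"

bool RecompressAsRGB(const char* inPath, const char* outPath)
{
    int w, h, n;
    // Request 3 channels so the alpha channel never gets involved.
    unsigned char* pixels = stbi_load(inPath, &w, &h, &n, 3);
    if (!pixels) return false;
    int ok = stbi_write_png(outPath, w, h, 3, pixels, w * 3);
    stbi_image_free(pixels);
    return ok != 0;
}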
_________________
Mike Kasprzak
'eh whatever. I used to make AAA and Indie games | Ludum Dare | Blog | Tweetar
Sirocco
Moderator

Joined: 19 Aug 2005
Posts: 9459
Location: Not Finland
PostPosted: Thu Apr 11, 2013 4:53 am

Quote:

Video cards have extra transistors whose sole purpose is to decode compressed texture formats at practically no additional cost


Good to know. Last I checked, GPUs only worked with raw textures... but this was like 2008 or so.



Quote:

Then I tried PNG, but as previously stated, DX likes to have alpha channels where they're not needed. The PNG output ends up mixing alpha with RGB which creates a mess (like how Windows Photo Viewer fucks this up).
So here I am in 2013 outputting BMP files.


I ran into some shit like that with Moai when I implemented alpha support.

Some implementations require the alpha channel to be full (that is, 255) to have a fully opaque picture. Others will automatically assume that a pic with an empty channel (that is, all values are 0) is fully opaque. When I load a pic on Moai, it makes its best guess as to what it got.
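
The guess itself is simple. This isn't Moai's actual code, just the shape of the idea:
Code:
// Not Moai's actual code, just the shape of the guess: if every alpha
// byte is 0 the image was almost certainly authored without alpha, so
// treat it as fully opaque.
bool LooksOpaque(const unsigned char* rgba, int pixelCount)
{
    for (int i = 0; i < pixelCount; i++)
        if (rgba[i * 4 + 3] != 0)
            return false;   // found real alpha data, trust the channel
    return true;            // empty channel, assume opaque
}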

PNG doesn't require an alpha channel, either -- it can be RGB. But you have to read the file yourself and check the header to see what it has unless your lib exposes that information.

PNG is nice because it supports RGBA, but it can be sorta slow on the encode side. I ended up storing workspace files (which the user never sees) as Targa. The files were huge, but it was very, very fast. Speed was more important for temporary storage.
_________________
NoOP / Reyn Time -- The $ is screwing everyone these days. (0xDB)
PoV
Moderator

Joined: 21 Aug 2005
Posts: 10899
Location: Canadia
PostPosted: Thu Apr 11, 2013 6:20 am

Sirocco wrote:
Good to know. Last I checked, GPUs only worked with raw textures... but this was like 2008 or so.

Texture compression has been around for a long time. Pretty much every 3D game console, even the portable ones, supports some form of it. Even the Dreamcast supports it. PS1 did paletted textures, which is ultimately the same idea (reducing the size of the data). :)
_________________
Mike Kasprzak
'eh whatever. I used to make AAA and Indie games | Ludum Dare | Blog | Tweetar
Bean
Admin

Joined: 20 Aug 2005
Posts: 3776

PostPosted: Thu Apr 11, 2013 9:07 am

The BMP files come out fine (since they don't support alpha, it's hard to fuck this up). But they're huge.

JPG works (again no Alpha) but the quality is shit.

PNG as a format is great. I've been using it for 1, 3 and 4 channel textures since the beginning.
The issue is that to save a texture I have to first create a texture. This texture needs to be RGBA (apparently they did add an RGB format in DX11 but I'm using DX10). But there's no way to tell the FileSaving function to just ignore the Alpha channel.

Unlike the rest of the game, I can't use a compressed format for this because the FileSaving function can't do the conversion to PNG.

Perhaps I can write 1's or 0's to the texture's Alpha channel before saving it..
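
Something like this should do it (rough sketch, assuming the staging pixels can be touched as RGBA8 bytes; the helper name is made up):
Code:
// Rough sketch, assuming the staging texture maps as RGBA8 bytes.
// Force every alpha byte to 255 (fully opaque) before handing the
// pixels to the file saver, so it can't blend garbage alpha into the RGB.
void ForceOpaque(unsigned char* rgba, int pixelCount)
{
    for (int i = 0; i < pixelCount; i++)
        rgba[i * 4 + 3] = 255;
}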


-Bean
_________________
Kevin Reems | Nuclear Playground | Solid Driver
Sirocco
Moderator

Joined: 19 Aug 2005
Posts: 9459
Location: Not Finland
PostPosted: Thu Apr 11, 2013 12:45 pm

Quote:

Perhaps I can write 1's or 0's to the texture's Alpha channel before saving it..


So long as the data structure isn't Byzantine, that's an easy fix. I say do it.
_________________
NoOP / Reyn Time -- The $ is screwing everyone these days. (0xDB)
Bean
Admin

Joined: 20 Aug 2005
Posts: 3776

PostPosted: Thu Apr 11, 2013 6:37 pm

I got lazy and switched to JPEG despite the artifacts. I instead spent time coming up with a way to name the files.

I didn't want to cause a pause in the game by digging through the directory and checking all the existing files for type and numbers in their names and all that bullshit.

Instead I made a function that generates ID numbers based on time, which may have uses for other stuff as well. It takes the year, the day of the year, and the number of seconds since midnight and smashes them all together. There are some tricks to deal with multiple IDs being issued within the same second, and with the rollover at midnight.
The key is that it always spits out a unique ID number, and those numbers are always in order.
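
A simplified sketch of the idea (not the exact engine code):
Code:
// Simplified sketch, not the engine's actual code.
// ID = year, day-of-year and seconds-since-midnight smashed together,
// with two spare digits so IDs issued within the same second stay unique.
#include <ctime>

unsigned long long GenerateTimeID()
{
    static unsigned long long last = 0;
    std::time_t now = std::time(nullptr);
    std::tm* t = std::localtime(&now);
    unsigned long long secs = t->tm_hour * 3600 + t->tm_min * 60 + t->tm_sec;
    unsigned long long id =                       // digits: YYYYDDDSSSSSCC
        (((unsigned long long)(t->tm_year + 1900) * 1000 + (t->tm_yday + 1))
            * 100000 + secs) * 100;
    if (id <= last) id = last + 1;  // same-second issue: bump the counter
    last = id;                      // keeps IDs unique and always in order
    return id;
}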

So with that, I can pump out screenshots as fast as I can press the button. These go into a new Screenshots dir in the player's User dir. I don't know if any of you guys have poked around the files but it's possible to have multiple Users with their own prefs, key binds etc. Although at this time there's no UI for it.


Here's a full-sized screenshot. It's not as apparent in this one, but it shows the JPEG quality.
I'm currently rendering everything with 50% alpha so I can start working on optimizing. There's some serious culling to be done. Hence the square blood and translucency.


-Bean
_________________
Kevin Reems | Nuclear Playground | Solid Driver
Bean
Admin

Joined: 20 Aug 2005
Posts: 3776

PostPosted: Sat Apr 13, 2013 8:41 am

Previously, almost everything was drawn from back to front. This was due to a lot of my objects having translucency, so drawing that way ensured everything looked correct. The problem is that it makes little use of the z-buffer.



Again, I'm rendering in 50% alpha so it's possible to see what's behind geometry. Here we see I have changed the z-order a bit.



..even more so. Almost nothing is being rendered behind the wall on the right. This culling is being done by the hardware. So the problem still exists that I'm sending that data to the card, but at least it's not wasting additional time by drawing it.

The sorting I'm doing is to draw all solid objects front to back, then draw blended objects back to front, taking advantage of the z-buffer data created by the solid objects.
Obviously there are some tweaks and exceptions. Particles, for example, aren't sorted at all; they're just tossed in at the end. But that's the basic idea.
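
In code, the sort pass boils down to something like this (sketch with made-up types):
Code:
// Sketch of the sort order (made-up types, not the engine's real ones).
#include <algorithm>
#include <vector>

struct DrawItem { float depth; /* mesh, material, ... */ };

void SortForDraw(std::vector<DrawItem>& solid, std::vector<DrawItem>& blended)
{
    // Solids near -> far: lays down z early so later pixels get rejected.
    std::sort(solid.begin(), solid.end(),
        [](const DrawItem& a, const DrawItem& b) { return a.depth < b.depth; });
    // Blended far -> near: blends correctly over whatever passed the z-test.
    std::sort(blended.begin(), blended.end(),
        [](const DrawItem& a, const DrawItem& b) { return a.depth > b.depth; });
}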


-Bean
_________________
Kevin Reems | Nuclear Playground | Solid Driver
mikedoty
Developer

Joined: 18 Mar 2006
Posts: 1788

PostPosted: Sat Apr 13, 2013 5:05 pm

I guess in theory, you would only draw the closest solid object (because even the alpha objects behind it would be hidden), and then from there you'd draw every alpha object closer than the closest solid object, rendering from far to near. It almost sounds easy on paper... don't want to imagine the work that goes into it right now, though. :)
_________________
The end of the game, yes, is pretty much getting the weapon and killing off the population.
mashup games . com | Finally! - A Lode Runner Story
Bean
Admin

Joined: 20 Aug 2005
Posts: 3776

PostPosted: Mon Apr 15, 2013 9:13 am

Actually the alpha objects are drawn from far to near even when they're further away than the solid objects.
The z-buffer prevents anything from being drawn that's behind a solid object. This is why the solids are drawn first and from near to far order. The idea is to block unnecessary drawing as early as possible. So if I can get a bunch of big solid objects drawn near the camera right at the beginning, it'll cause a bunch of other stuff to be ignored later.

Next I want to experiment with instancing. At the least it should benefit my particle rendering. At most, I could use it for damn near everything on the screen!


-Bean
_________________
Kevin Reems | Nuclear Playground | Solid Driver
Sirocco
Moderator

Joined: 19 Aug 2005
Posts: 9459
Location: Not Finland
PostPosted: Mon Apr 15, 2013 10:00 am

Random tidbit: you can create fancy-looking tree foliage using point sprites. If you do it right, and keep the sprites animated, the effect will be difficult for casual observers to notice. Lotta games have been doing this for far-off trees, and switching to fully constructed trees up close.

Also useful for clumps of grass.
_________________
NoOP / Reyn Time -- The $ is screwing everyone these days. (0xDB)
Bean
Admin

Joined: 20 Aug 2005
Posts: 3776

PostPosted: Tue Apr 16, 2013 2:13 am

I ended up putting the instancing stuff on the backburner for the time being. I figured the light model ought to be my priority since it could complicate things with instancing.

I've got a good start too. It looks like I'll be able to double the number of lights per object to four. In addition to those I'll be adding an overhead light as well.
No bump mapping yet, but so far I'm averaging 40fps on my 5-year-old system, which is very promising.

As for distant stuff, I already use low-poly models for those, but I think that combined with instancing it would speed things up even further.
If anything, I want it as an option for my next game, which will be another side-scrolling shooter (done right). I plan on having a couple bullet hell levels in there, and being able to draw fifty billion projectiles at once will benefit greatly!

I have another trick up my sleeve for backgrounds which I believe will be more suited for first person games. Sooner or later I've gotta get Diversion completed..


-Bean
_________________
Kevin Reems | Nuclear Playground | Solid Driver
Bean
Admin

Joined: 20 Aug 2005
Posts: 3776

PostPosted: Wed Apr 17, 2013 10:34 pm

Progress continues to be good with the lighting.
Total lights per object came out to:
1 overhead - Like a directional light that's always aimed downward.
1 directional - This can point in any direction. Diversion uses it for the Sun.
4 dynamic - These can be omni or spot lights.

These are all handled in a single pass. If necessary I could trade a dynamic for another directional but with the new overhead light I don't see the need for it.

I've axed the color tables. These were loaded from PNG files and contained the colors used for the day/night cycles and such.
To make life easier on myself I'm now defining a handful of colors and interpolating between them. This will also make it easy for future games to modify these colors, e.g. Level 2 is now on a different planet and it's got a green sunset or whatever.

Plus I did a lot of other reorganization. I'm happy with the direction things are going. It shouldn't be too terribly long before I get back to working on the game.


-Bean
_________________
Kevin Reems | Nuclear Playground | Solid Driver
Bean
Admin

Joined: 20 Aug 2005
Posts: 3776

PostPosted: Thu Apr 25, 2013 1:02 am



I did an experiment, actually two.
One) Instead of doing my shader calculations in tangent space, I'm now doing them in world space.

If I'm losing you: basically, in 3D you're dealing with several coordinate systems. World space is the full world. Then you have object (or model) space, which is relative to a mesh. And tangent space is relative to a single poly. I could go into more detail, but the point is: when doing comparisons and calculations, you generally want the things you're comparing to be in the same space.

I was previously using tangent space (and converting a lot of shit to tangent space in my vertex shaders) simply because that's how I learned years ago. By switching to world space I'm converting a lot less shit into another space. More importantly, I don't have to pass these converted things from the vertex shader to the pixel shader. The amount of shit you can pass is limited so this is a good thing. In fact I managed to eliminate all light data from that hand-off which in turn means I can have as many lights as I want. The image above shows 11 on a single quad.

Two) I'm doing almost all calculations in the pixel shader. This is a double-edged sword. Doing calculations per pixel instead of per vertex is obviously a lot more work for the GPU. On the flip side, you get more accurate results. Currently I'm still getting about 40fps, which is all well and good, but I haven't started getting crazy with the shader effects yet, so I'm planning to move some stuff back to the vertex shader. The trick will be figuring out what should and should not move. My next step is to get better educated in that regard..


edit: Also, I will undoubtedly end up limiting the number of lights (per object) to something sane in the name of speed.

-Bean
_________________
Kevin Reems | Nuclear Playground | Solid Driver
Gil
Developer

Joined: 14 Nov 2005
Posts: 2341
Location: Belgium
PostPosted: Thu Apr 25, 2013 5:00 am

Bean wrote:
edit: Also, I will undoubtedly end up limiting the number of lights (per object) to something sane in the name of speed.
Wouldn't it be better to just limit the number of light sources in the scene to something sane, instead of imposing a per-object limit? I imagine it would be fun to have the option there, but restricted by careful level design. Then again, I don't know if that's feasible within your game (you have destructible terrain, right? Dynamic lights too?)
_________________
PoV: I had to wear pants today. Fo shame!
Bean
Admin

Joined: 20 Aug 2005
Posts: 3776

PostPosted: Thu Apr 25, 2013 7:33 pm

Lighting in Mutiny is handled per object. So there's no limit to the number of lights in the whole scene. There could be 500 lights but if object x is only in range of 3 of them, the rest get ignored before the renderer even touches anything.
Furthermore, lights have a priority level. So, for example, if there's a limit of 3 lights per object and someone fires a pistol, the pistol flash is a low-priority light which is only seen for one frame. The engine will likely ditch that in favor of more significant lights.
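
The per-object selection boils down to something like this (sketch; the names are made up):
Code:
// Sketch of per-object light selection (not the engine's actual code).
#include <algorithm>
#include <cstddef>
#include <vector>

struct Light { float priority; /* position, range, color, ... */ };

// Keep only the top maxLights by priority out of the lights in range of
// an object; a one-frame pistol flash (low priority) is the first to go.
void SelectLights(std::vector<const Light*>& inRange, std::size_t maxLights)
{
    if (inRange.size() <= maxLights) return;
    std::partial_sort(inRange.begin(), inRange.begin() + maxLights, inRange.end(),
        [](const Light* a, const Light* b) { return a->priority > b->priority; });
    inRange.resize(maxLights);
}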

If it gives a boost in speed, a limit is worth it; Mutiny does a pretty good job with a small number of lights per object.
That said, the most important thing I learned while studying photography is the importance of light. The lighting can make all the difference in beauty and shit. Up till now I've been keeping the dynamic lights in the level(s) away from each other so that there's only ever one light in the area. This frees up others for special effects.

I would like to be able to throw more lights around and improve my visuals. I've already decided not to make any changes to the (low poly) models in Diversion. But it'd be nice to make it better looking without a significant amount of work.


-Bean
_________________
Kevin Reems | Nuclear Playground | Solid Driver
Bean
Admin

Joined: 20 Aug 2005
Posts: 3776

PostPosted: Mon Apr 29, 2013 12:09 pm

Woo! Got bump mapping back in! Things are lookin good. Next will be cube mapping. From what I understand this has been streamlined in DX10 so we shall see how that turns out. Then I'll have teh shiney!

Screenshots when I'm not posting from a phone..

-Bean
_________________
Kevin Reems | Nuclear Playground | Solid Driver
Bean
Admin

Joined: 20 Aug 2005
Posts: 3776

PostPosted: Fri May 03, 2013 1:01 am

It brings me great joy to report that Diversion is now averaging over 100FPS on my 5-year-old machine! That's with no reduction in graphics.

To quote myself:
Bean wrote:
This culling is being done by the hardware. So the problem still exists that I'm sending that data to the card, but at least it's not wasting additional time by drawing it.


As you may have guessed, I'm no longer sending that data to the card. Mutiny now culls hidden objects on the fly.
Here's a wireframe overlay which shows what's being drawn behind a wall as I move behind it. Every large object in the world can do this now.






Look at my BALLS!


-Bean
_________________
Kevin Reems | Nuclear Playground | Solid Driver
Sirocco
Moderator

Joined: 19 Aug 2005
Posts: 9459
Location: Not Finland
PostPosted: Fri May 03, 2013 4:57 am

As I've always suspected, your balls are big and shiny.
_________________
NoOP / Reyn Time -- The $ is screwing everyone these days. (0xDB)
Bean
Admin

Joined: 20 Aug 2005
Posts: 3776

PostPosted: Sat May 11, 2013 10:53 am

Mutiny can now load cube maps from DDS files. It can of course render cube maps now as well. Yay!

All that's left is making it so I can render scenes to cube maps. DirectX10 introduced a fancy new way to render all 6 sides in one go using some geometry shader trickery. Formerly you'd render each side one at a time, which means running through a lot of calculations six times. This new method is supposed to be hella faster.
Up to this point I have no experience with geometry shaders, so I plan on giving this a try.


Aside from cubic stuff, I did a lot of optimizing. Shader data being passed around goes through a packing and padding process. You can read more details and see some nice visualizations here. The brief version is that it's optimal to pass data around in hunks of 4 floats at a time. Most graphics data is in this format already (e.g. RGBA, 4D Vector, 4x4 Matrix etc.) so it makes sense to engineer the hardware around this fact.

So I've re-organized my data in this way. An example: instead of sending two values about a light in two float calls, I now send both as a 4D vector where x is one value and y is another. And you're thinking, "but you're not even using z and w? That's a waste!" That's what I thought at first as well, but because this is an array of light data, those two float calls would be padded into float4's anyway, resulting in 8 floats being passed instead of the 4 I have now! So by doing that I cut the bandwidth in half. And I can later add two more floats to that 4D vector at no cost.
I've been applying this thinking to everything now.
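
On the C++ side the idea looks something like this (sketch; the light struct here is made up):
Code:
// Sketch of the packing idea (the light struct is made up).
// Shader constants travel in 16-byte (float4) registers, so two loose
// floats per light would each get padded out to a full float4 anyway.
struct LightParams            // maps to a single float4 in the shader
{
    float intensity;          // .x
    float radius;             // .y
    float unused_z;           // .z -- free slot for a future value
    float unused_w;           // .w -- free slot for a future value
};
LightParams lights[4];        // one float4 per light: half the bandwidth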

Here you can see how I've been commenting my declarations so I can keep track of what goes where. These are values in the shaders that will be filled in by my C++ code. As you can see I've got some Unused slots available for any new features.
Code:
float4        use_texture;                //.x    map_color            .y    map_normal        .z    map_alpha        .w    Unused
float4        surface_attributes_subset;  //.x    Spec Intensity       .y    Spec Power        .z    Spec Glare       .w    Reflectivity
float4        surface_attributes_face;    //.x    Emissive Intensity   .y    Unused            .z    Unused           .w    Unused



In other shader optimization news, I've consolidated many of the shader "techniques". These are sets of Vertex/Geometry/Pixel/etc. shader functions in an fx file. It's easy to end up with a shit load of these for every type of surface you can imagine. The problem is you end up with a lot of redundancy, like having two pixel shaders that are identical except one uses a normal map and the other doesn't, when a single shader could decide on its own whether to get a normal from a map or from the vertex shader.
The DirectX9 version of Diversion was really bad about this. Not only that but I was supporting two shader model versions in three fx files! So I had three files full of spaghetti code where if I made a change to one function I likely needed to do the same thing in like five others as well.
No more of that bullshit!
Fact is, I took a minor performance hit with the additional logic, but having manageable code is more important to me than worrying about a 3% loss in speed. I have more than made up for that loss elsewhere.


-Bean
_________________
Kevin Reems | Nuclear Playground | Solid Driver
Bean
Admin

Joined: 20 Aug 2005
Posts: 3776

PostPosted: Mon Jul 01, 2013 8:26 pm

I've been working on how the sky is rendered :)



No more color tables for day/night shifts, light colors, sky colors and their intensities.
I'm now defining a few key colors and using linear interpolation to fill in the rest. This makes it extremely easy to modify any of the above, which will greatly come in handy in other games where I want to dynamically change things, e.g. you visit new planets, each with different atmospheric characteristics.
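
The interpolation itself is just a lerp between the two key colors that bracket the current time of day (sketch):
Code:
// Sketch: lerp between the two key colors bracketing the current time.
struct Color { float r, g, b; };

Color Lerp(const Color& a, const Color& b, float t)
{
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// keys[] holds a handful of colors spread evenly across the day, t in 0..1.
Color SampleDayColor(const Color* keys, int count, float t)
{
    float f = t * (count - 1);     // position within the key list
    int   i = (int)f;
    if (i >= count - 1) return keys[count - 1];
    return Lerp(keys[i], keys[i + 1], f - i);   // blend the two neighbors
}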

I've been up to other tweaks and fixes but they're mostly boring. Most of the shader stuff is done. I need to get render to texture working again and fix that damn buffer overflow. Once those are done I'll be shifting focus back to the game.


-Bean
_________________
Kevin Reems | Nuclear Playground | Solid Driver
Bean
Admin

Joined: 20 Aug 2005
Posts: 3776

PostPosted: Wed Jul 03, 2013 3:52 pm

Oh joy, I fixed that memory issue!
That was a bitch. It only happened in release builds, so I couldn't use a debugger. Almost every code change I made would shift the memory corruption somewhere I couldn't find it. But in the end I narrowed it down to a (surprise) string problem. I was calling atof on a string that did not have a null at the end, which causes undefined behavior. In my case that meant anything from volume settings being set to impossible values to flashlights not working.
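
The fix amounts to guaranteeing the terminator before converting (sketch; the buffer size is arbitrary):
Code:
// atof() walks until it finds a '\0'; if the field isn't terminated it
// reads past the end, which is undefined behavior. Copy the raw field
// into a bounded buffer and terminate it first.
#include <cstdlib>
#include <cstring>

double SafeAtof(const char* data, std::size_t len)
{
    char buf[64];
    if (len >= sizeof(buf)) len = sizeof(buf) - 1;
    std::memcpy(buf, data, len);
    buf[len] = '\0';             // the missing terminator
    return std::atof(buf);
}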


-Bean
_________________
Kevin Reems | Nuclear Playground | Solid Driver


Sirocco
Moderator

Joined: 19 Aug 2005
Posts: 9459
Location: Not Finland
PostPosted: Wed Jul 03, 2013 4:30 pm

Quote:

I was calling atof on a string that did not have a null at the end.


Heheheh... no wonder it was random!

Glad you knocked that out.
_________________
NoOP / Reyn Time -- The $ is screwing everyone these days. (0xDB)
Bean
Admin

Joined: 20 Aug 2005
Posts: 3776

PostPosted: Mon Jul 08, 2013 11:41 pm

Today I added support for parallax mapping. This isn't something Diversion will make heavy use of but the majority of the code is already there for normal mapping so it wasn't hard to add in. Most of my work today was spent experimenting with the maps themselves.

For those not familiar: normal mapping defines the angle (the surface normal) at a point on a surface, while parallax mapping defines the height of a point off the surface. When you combine the two you get some really convincing surface features.
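
The classic offset trick is only a couple of lines. Written as C++ for illustration since it normally lives in the pixel shader (names made up):
Code:
// The basic parallax offset, written as C++ for illustration (it really
// lives in the pixel shader). Shift the UV along the tangent-space view
// direction by an amount proportional to the sampled height.
struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

Vec2 ParallaxUV(Vec2 uv, Vec3 viewTS, float height, float scale, float bias)
{
    // viewTS: normalized tangent-space view dir; height: 0..1 map sample
    float h = height * scale + bias;          // remap the height sample
    return { uv.x + viewTS.x * h,             // shift the UV toward the eye
             uv.y + viewTS.y * h };
}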


These buttons are on a flat quad. They appear to stick out even as the camera moves around.



Normal Mapping (screenshot from a few days ago):


Normal & Parallax Mapping (Same location shot today):


The effect isn't as noticeable here (especially when not in motion). I also didn't want to make the newbie mistake of overdoing it. I'm sure you all recall the many games that came out some years ago that all had shiny, plastic-looking everything. I want my rendering to look good without it being obvious why. In fact, the player shouldn't even care why; they should be playing the damn game.


-Bean
_________________
Kevin Reems | Nuclear Playground | Solid Driver
Sirocco
Moderator

Joined: 19 Aug 2005
Posts: 9459
Location: Not Finland
PostPosted: Tue Jul 09, 2013 6:45 pm

Quote:

I'm sure you all recall the many games that came out some years ago that all had shiny plastic looking everything.


Parallax is great for brick walls :)

But yeah, not everything needs to look slick and shiny. You've got the right idea. I can't wait to see the upgrades!
_________________
NoOP / Reyn Time -- The $ is screwing everyone these days. (0xDB)
Alex
Developer

Joined: 05 Sep 2005
Posts: 1159

PostPosted: Wed Jul 10, 2013 6:26 am

I don't know much about these things, but perhaps there's a way to add some sort of dust mask thing that dulls the effects in random areas.. Having surfaces look too uniform in any kind of way can be a bit odd.