I don't care how you get it, go play Aquaria. The only 2D game that's better than Aquaria is Braid, and yet very few people have played or even heard of it. It's quite simply one of the most incredible masterpieces in game design, music and art that I have ever seen. If you don't like all the exploring, then use a walkthrough, but one way or another, just play the game and beat it, preferably finding the secret ending.
Lost to the Waves - Aquaria OST
What is it? Aquaria is like Super Metroid, except underwater. It starts out with an emphasis on puzzle solving, moves into exploring lots of amazing places (a kelp forest, the surface, a frozen lake, etc.), and then concludes with you diving deep into the depths of The Abyss, fighting hordes of horrors that will haunt your nightmares for weeks, with everything in the game tied together by a brilliant ending to its plot.
Hints: Dashing allows you to cling to walls underwater, and repeatedly dashing/clinging is a good way to move rapidly when you don't have the Fish form.
If you don't know where to go next, it's always a place you haven't explored yet, and it will always be easily accessible with the powers you currently have. The white areas on the map are the places you've explored - use the map to your advantage.
Numbers 1-8 on the keyboard switch forms.
If you get stuck long enough, the game will highlight which map section you're supposed to go to.
The game encourages you to look for treasures, but never adequately explains how valuable some of them can be. Certain treasures are wearable and give you things like a defense bonus, while others have more interesting effects (there is a pot that gives you infinite fish meat for your kitchen).
Do as much optional stuff as you can find, because it often gives you extremely valuable abilities - so valuable it's often not immediately obvious that they ARE optional.
When exploring, 95% of the time there is a reason something is on the map. If you see a place or cave you haven't explored and it doesn't go anywhere, there is something valuable inside it. The only exception is when there are multiple paths to get to a location, in which case I've found that it usually doesn't matter. You're looking for places that are dead ends. Dead ends are never just dead ends; there is ALWAYS something there. Just remember that this is an adventure game, and that being curious is usually rewarded if the hordes of enemy monsters don't rip your face off.
Energy form is a viable weapon even on the very last boss. In fact, despite Energy Form being the first form you get, eating a Spicy Hand Roll turns it into the second most powerful attack in the game (THE most powerful if you go by damage per second). If you want something dead, blast it with energy form.
December 28, 2010
November 18, 2010
The IM Failure
This is completely insane. First, Microsoft has rendered its new Windows Live Messenger (previously MSN Messenger) almost completely useless.
- No more handwriting
- No more setting your name to anything other than your first and last name.
- All links you click on redirect you to a page from Microsoft warning you about the dangers of the internet, requiring you to click a link to proceed.
- All photosharing is now incompatible with previous versions of messenger, and instead of actually just sending the file, it will fail completely.
- Any YouTube video, image, or other link you copy-paste into the window will automatically trigger a sharing session whether you like it or not.
- It will, at times, randomly decide none of your messages are getting through.
- You can no longer have a one-sided webcam session. WLM will simply leave your webcam as a giant, useless blank image underneath your conversation partner, demanding that you buy a webcam.
- Its new emoticons must have been influenced by H.R. Giger (you can't turn them off and leave custom ones on).
Naturally I wasn't able to put up with this for very long and moved to pidgin. Pidgin has many issues of its own, including an ass-backwards UI design and this really annoying tendency to create a new popup window to confirm every single file transfer, among other truly bizarre UI design elements. I was willing to put up with these, because honestly pidgin is designed for Linux and just happens to work on windows too, and their libpurple library is the basis of almost all open-source IM clients.
Pidgin, however, has now stopped working as well. It now spastically refuses to connect to the MSN service because the SSL certificate is invalid. I can appreciate it trying to protect my privacy, but there is no way to override this. So, pidgin is now out of the question.
Well, how about Trillian? During its installation, Trillian informed me that "The Adobe Flash 9.0 plugin for Internet Explorer could not be found. This plugin is required for some of Trillian's features."
Trillian is no longer on my computer.
After digging around on Wikipedia, I came up with Miranda IM, which seemed to be my last hope for a multi-protocol service that didn't suck total ass. It supported WLM, AIM, and... not Google Talk? Not XMPP, the most useful extensible open source protocol? Um, ok. Its UI design is more compact than Pidgin's but arguably even worse, and it supports neither custom emoticons nor much of ANYTHING else on the WLM protocol. It served its purpose by at least letting me log the fuck in like I wanted to, though.
This is driving me up the wall. If anything else happens, I'm going to snap and simply make time to work on my own IM client implementation that doesn't have glaring design flaws like every single other one. Honestly, requiring the Internet Explorer flash plugin to run everything? What the fuck are they smoking? What the hell is wrong with these developers?!
If only D had a good development environment, then I could write my IM client in it.
October 18, 2010
How To Train Your Dragon
...takes its place as one of my favorite movies of all time. The characters are brilliant, the story is superb, the music is divine, and the cinematography is breathtaking. It was so good it gave me the last bit of inspiration I needed to make this (WIP), and as a result my attempts at finishing CEGUI this weekend were thrown out the window. Tomorrow I'll try a first draft of the drums and possibly a couple of effects before extending the main chorus. These new samples are amazing - it's like I can do everything I ever wanted. Now the hard part is figuring out what I want. I threw in a bunch of binaural sounds thanks to the Freesound project, which has been added to my mental list of useful resource sites.
I finally got the entire CEGUI dependency chain statically recompiled and have wrapped everything into a giant megadll with no dependencies. I then discovered that for all this time, PlaneShader's Release mode had been defaulting to __fastcall. I have no idea how it has even been compiling with my other projects; the reason I don't change the calling convention is because it breaks absolutely everything. For good measure I threw a few extra BSS_FASTCALL into the really high-traffic functions.
So, if it weren't for my sudden musical inspiration, I'd be rebuilding the GUI in CEGUI, although I also need to figure out how the heck it renders text, because I need a text renderer, and I can't just require CEGUI's to be used. I need to build a very basic but supercrazyfast text renderer, because holy shit is DirectX bad at that.
After that I finalize a midpoint release of planeshader and move to reimplementing Decoherence's GUI in CEGUI, then FINALLY get to those joints that were supposed to be done ages ago.
Of course, I also have a midterm Wednesday. Fuck.
September 13, 2010
Album For Sale! [Renascent]
Due to Bandcamp's sudden threat to turn all of my free downloads into paid ones, I decided to go ahead and start selling my music properly. Renascent is now available for $3, or about as much as a gallon of milk costs. It contains remastered, super high quality (lossless if you choose to download in FLAC format) versions of all 14 songs, in addition to the original FLP project files used to create them. If you have ever wondered how I made a particular song, this might be another incentive to purchase the album. Note that these FLPs are released under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 license, so you can't go running off with them like free candy.
Track List:
1. On The Edge (2:56)
2. Renascent (4:06)
3. The Boundless Sea (6:49)
4. Duress (2:40)
5. Seaside Lookout (4:54)
6. Sapphire [Redux] (2:20)
7. Absolutia (3:04)
8. The Plea (3:46)
9. Now (2:34)
10. Alutia (4:10)
11. Rite (5:20)
12. Crystalline Cloudscape (4:04)
13. All Alone (3:06)
14. SunStorm (4:12)
Total Time: 56:44
Listen and Buy It Here
September 5, 2010
Portland and Zaron
On Wednesday, Zaron mentioned that he was heading to Kumoricon (anime convention) in Portland, Oregon on Friday, but that the actual convention didn't start until Saturday. So, I figured this was a great time to be a gigantic hilarious fanboy and be like "I CAN MEET ZARON IN PERSON!!!!" Getting there and back in one day wasn't feasible, so they said I could crash with them for a night, and I ended up riding a train there.
I took a bus to the train and everything was peachy, except the guy next to me was apparently very social. As a result, me not talking very much and just looking out the window eventually prompted him to say something to the effect of, "Do I make you nervous?" Well, now you sure fucking do!
Got off the train, found light rail station, tried to call Zaron but he couldn't hear me, then proceeded to be scared shitless of what I can only presume are gang members hanging around there with boomboxes cranking out distorted hiphop. Meanwhile, I am a skinny, white middle-class male with luggage who clearly doesn't know where he is. Zaron called back while I was there, having apparently moved to a quiet location, and I told him that I would just call him back when I got to the hotel. Now he knows why I said that.
I was glad I scoped out the area with Google Street View first, because my stop wasn't named what I thought it would be named. Finding the hotel with the convention, however, was very easy.
"Is that.... Is that supposed to be Cloud? Is that hair blue? What the f-"
The convention was a hilarious disaster. Zaron's friend, LardMan, recorded them starting at the beginning and walking all the way down to the end. It took 2 and a half minutes. They had been there about a half hour before I arrived, at which point I just laughed and wandered around the convention, staring at all the crazy costumes. There was an Inuyasha, an Umbreon, like 4 people with keyblades, and this guy with a GIANT PLASTIC SUNFLOWER. A few girls with cat ears and the wackiest hairdos I have ever seen. I thought some of them had found a way to violate the laws of physics.
Almost the first thing Zaron gave me was this. The night before they left for the convention, he said he was drawing something and couldn't find his markers and would have to use crayola so I laughed at his misfortune and wondered aloud what pointless thing he was drawing. Then he told me "You wouldn't be laughing if you knew what it was." Hence, upon viewing the amazing masterpiece that he made just for me, I, of course, exclaimed "YOU ASSHOLE!"
The irony.
After it took them like an hour to get all registered, we all went out for dinner. I convinced them to go to Subway (a whole 2 blocks away) and learned that I walk very fast when excited. Zaron let me flip through his sketchbook and various other things while I asked embarrassing questions and fanboy'd over all the pictures. Their printer failed in amazing new ways because the red ink didn't work, so they are probably printing out greyscale business cards now. Then we went to see SCOTT PILGRIM, which is one of the greatest movies ever made, and anyone that says otherwise will DIE in a HILARIOUS MANNER.
Then we did stuff and went to sleep.
We were woken up by the cleaning lady going "Oops sorry!" as she closed the door on the hotel room. We then watched a stupid TV show for an hour and I said goodbye, went downstairs, slipped through the crowds and vanished on to the street. Got to the train station an hour early, did stuff, went home, everything was peachy.
And then I realized that I had paid for absolutely nothing the entire time because the only money in my wallet was enough for a bus fare. THEY are the ones with money problems, not me :(
BUT HEY I GOT A HAND-DRAWN PICTURE FROM ZARON SO WOOHOO!
I am totally not creepy at all.
August 25, 2010
WavSaver
There is a documented bug in Windows 7 that has pissed me off a few times and recently crippled a friend of mine, where a .wav file with corrupted metadata causes explorer.exe to go into an infinite loop. My friend has a large collection of wavs that somehow got corrupted, so I wrote this program to strip them of all metadata. Due to the nature of the bug, the program can't delete the originals (you must use the command prompt to do that), but instead creates a folder called "safe" with all the stripped wav files inside of it.
Hosted in case anyone else has corrupted wav files they need to save. Just stick it inside a folder and run it - it'll automatically strip all wav files in the same folder as the executable.
WavSaver
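The core of the stripping step can be sketched as a RIFF chunk filter that keeps only the "fmt " and "data" chunks. This is an illustrative sketch under my own assumptions (the function name and all details are mine, not WavSaver's actual code), assuming little-endian chunk sizes as the RIFF format specifies:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Walk the RIFF chunk list of a WAVE file and keep only the "fmt " and
// "data" chunks, dropping metadata chunks (LIST/INFO, id3, etc.) that can
// trigger the explorer.exe bug. Returns the stripped file as a new byte
// buffer, or an empty buffer if the input isn't a RIFF/WAVE file.
std::vector<uint8_t> StripWavMetadata(const std::vector<uint8_t>& in)
{
  auto id = [&](size_t off, const char* s) {
    return in.size() >= off + 4 && std::memcmp(&in[off], s, 4) == 0;
  };
  if(in.size() < 12 || !id(0, "RIFF") || !id(8, "WAVE"))
    return {};

  std::vector<uint8_t> out(12); // RIFF header; size gets patched at the end
  std::memcpy(&out[0], "RIFF", 4);
  std::memcpy(&out[8], "WAVE", 4);

  size_t pos = 12;
  while(pos + 8 <= in.size()) {
    uint32_t sz;
    std::memcpy(&sz, &in[pos + 4], 4);  // little-endian chunk size
    size_t total = 8 + sz + (sz & 1);   // chunks are padded to even sizes
    if(pos + total > in.size()) break;  // truncated or corrupt chunk - stop
    if(id(pos, "fmt ") || id(pos, "data"))
      out.insert(out.end(), in.begin() + pos, in.begin() + pos + 8 + sz);
    pos += total;
  }
  uint32_t riffsize = (uint32_t)(out.size() - 8);
  std::memcpy(&out[4], &riffsize, 4);
  return out;
}
```

Since the corruption lives in the metadata chunks, copying only the format and sample data sidesteps whatever explorer.exe chokes on.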
August 17, 2010
Pixel Perfect Hit Testing
After beating World of Goo (having stabilized things in my game and renamed it), I wondered how easy it was to decompile C# applications, and simultaneously thought this would be a great opportunity to get pixel perfect hit testing to work in my engine. So, I decompiled GearGOD's composition example and quickly discovered that his method of detecting mouse messages was... well, something completely different than his extremely bad attempt at explaining it to me had suggested.
Basically, he did not run into the window event issues that I was having because... he didn't use them. XNA keeps track of the mouse coordinates in its own separate update function, most likely using its special input hook, and hence there is no mousemove to keep track of. Instead of occurring when the user moves the mouse, the hit tests occur every single frame.
Hence, once you have utilized WS_EX_TRANSPARENT|WS_EX_COMPOSITED|WS_EX_LAYERED to make your window click-through-able, you then simply do a hit test on a given pixel after everything has been drawn, and swap out WS_EX_TRANSPARENT depending on the value. GetCursorPos and ScreenToClient will get the mouse coordinates you need, although they can be off your app window entirely so check for that too.
if(_dxDriver->MouseHitTest(GetMouseExact()))
SetWindowLong(_window,GWL_EXSTYLE,((GetWindowLong(_window,GWL_EXSTYLE))&(~WS_EX_TRANSPARENT)));
else
SetWindowLong(_window,GWL_EXSTYLE,((GetWindowLong(_window,GWL_EXSTYLE))|WS_EX_TRANSPARENT));
To get the pixel value, it's a bit trickier. You have two options - you can make a lockable render target, or you can copy the render target to a temporary texture and lock that instead. The DirectX docs said that locking a render target is so expensive you should just copy it over, but after GearGOD went and yelled at me I tested the lockable render target method, and it turns out to be significantly faster. Further speed gains can be achieved by making a 1x1 lockable render target and simply copying a single pixel from the backbuffer into it and testing that.
void cDirectX_real::ActivateMouseCheck()
{
if(_mousehittest) _mousehittest->Release();
DX3D_device->CreateRenderTarget(1,1,_holdparams.BackBufferFormat,D3DMULTISAMPLE_NONE,0,TRUE,&_mousehittest,NULL);
}
bool cDirectX_real::MouseHitTest(const cPositioni& mouse)
{
if(mouse.x<0 || mouse.y<0 || mouse.x>=(int)_width || mouse.y>=(int)_height)
return false; //off the stage entirely, so don't bother copying a pixel
RECT rect = { mouse.x,mouse.y,mouse.x+1,mouse.y+1 };
DX3D_device->StretchRect(_backbuffer,&rect,_mousehittest,0,D3DTEXF_NONE);
D3DLOCKED_RECT desc = { 0,0 };
if(FAILED(_mousehittest->LockRect(&desc, 0,D3DLOCK_READONLY)))
return true; //if the lock fails, assume a hit
unsigned char color = (*((unsigned long*)desc.pBits))>>24;
_mousehittest->UnlockRect();
return color>_alphacutoff;
}
Using this method, the performance drops from 620 FPS to 510 FPS at 1280x1024, which is fairly reasonable. However, my PlaneShader SDK example is still at 0.9.71, which does not have this updated, fast version, so it will be using a much slower method to do it. The end result is the same, though.
August 13, 2010
August 10, 2010
8-bit color cycling
Someone linked me to this awesome webpage that uses HTML5 to do 8-bit palette color cycling using Mark Ferrari's technique and art. I immediately wanted to implement it in my graphics engine, but soon realized that the technique is so damn old that no modern graphics card supports it anymore. So, I have come up with a pixel shader that recreates the same functionality, either by having one image with an alpha channel containing the palette indices and a separate texture acting as the palette, or by combining them into a single image. This is supposed to support variable palette sizes (up to 256), but I haven't had much ability to test the thing because it's so damn hard to get the images formatted correctly. So while all of the variations I'm about to show you should work, there is no guarantee they necessarily will.
Video Link
8-bit cycling multi-image
ps 2.0 HLSL
// Global variables
float frame;
float xdim;
float xoff;
// Samplers
sampler s0 : register(s0);
sampler s1 : register(s1);
float4 ps_main( float2 texCoord : TEXCOORD0 ) : COLOR0
{
float4 mainlookup = tex2D( s0, texCoord );
float2 palette = float2(mainlookup.a*xdim + xoff,frame);
mainlookup = tex2D(s1, palette);
return mainlookup;
}
ps 1.4 ASM
ps.1.4
texld r0, t0
mad r0.x, r0.a, c1, c2
mov r0.y, c0
phase
texld r1, r0
mov r0, r1
It is also possible to write the shader in ps.1.1 but it requires crazy UV coordinate hacks.
frame is a value from 0.0 to 1.0 (ps.1.4 will not allow you to wrap this value, but ps.2.0 will) that specifies how far through the palette animation you are.
xdim = 255/(width of palette)
xoff = 1/(2*(width of palette))
Note that all assembly registers correspond to a variable in order of its declaration. So, c0 = frame, c1 = xdim, c2 = xoff.
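The constant setup above can be sketched on the CPU side like this (struct and function names are illustrative, not engine code):

```cpp
// CPU-side setup for the multi-image cycling shader's constants. The 0-255
// palette index is stored in the image's alpha channel as a 0.0-1.0 value,
// so xdim rescales it to the palette texture's width and xoff recenters the
// lookup on texel centers.
struct CycleConstants { float frame, xdim, xoff; };

// paletteWidth: width of the palette texture in pixels.
// t: how far through the palette animation we are, 0.0 to 1.0.
CycleConstants ComputeCycleConstants(int paletteWidth, float t)
{
  CycleConstants c;
  c.frame = t;                           // shader constant c0
  c.xdim = 255.0f / paletteWidth;        // shader constant c1
  c.xoff = 1.0f / (2.0f * paletteWidth); // shader constant c2 (half texel)
  return c;
}
```

For a 256-pixel-wide palette, for example, an alpha of 1.0 lands on 255/256 + 1/512 = 511/512, which is exactly the center of texel 255 - that is what the half-texel offset is for.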
8-bit cycling single-image
ps 2.0 HLSL
// Global variables
float frame;
float xdim;
float xoff;
// Samplers
sampler s0 : register(s0);
float4 ps_main( float2 texCoord : TEXCOORD0 ) : COLOR0
{
float4 mainlookup = tex2D(s0, texCoord );
float2 palette = float2(mainlookup.a*xdim + xoff,frame);
mainlookup = tex2D(s0, palette);
mainlookup.a = 1.0f;
return mainlookup;
}
ps 1.4 ASM
ps.1.4
def c3, 1.0, 1.0, 1.0, 1.0
texld r0, t0
mov r1, r0
mad r1.x, r1.a, c1, c2
mov r1.y, c0
phase
texld r0, r1
mov r0.a, c3
frame is now a value between 0.0 and (palette height)/(image height).
xdim = 255/(image width)
xoff = 1/((image width)*2)
24-bit cycling
ps 2.0 HLSL
// Global variables
float frame;
float xdim;
float xoff;
// Samplers
sampler s0 : register(s0);
sampler s1 : register(s1);
float4 ps_main( float2 texCoord : TEXCOORD0 ) : COLOR0
{
float4 mainlookup = tex2D( s0, texCoord );
float2 palette = float2(mainlookup.a*xdim + xoff,frame);
float4 lookup = tex2D(s1, palette);
return lerp(mainlookup,lookup,lookup.a);
}
ps 1.4 ASM
ps.1.4
def c3, 1.0, 1.0, 1.0, 1.0
texld r0, t0
mov r2, c0
mad r2.x, r0.a, c1, c2
phase
texld r1, r2
mov r0.a, c3
lrp r0, r1.a, r1, r0
Variables are same as 8-bit multi-image.
All this variation does is make it possible to read an alpha value off of the palette texture, which is then interpolated between the palette and the original color value. This way, you can specify 0 alpha palette indexes to have full 24-bit color, and then just use the palette swapping for small, animated areas.
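As a CPU reference, the 24-bit variant boils down to a per-pixel lerp, mirroring the ps 2.0 HLSL above (the struct and function names here are illustrative, not engine code):

```cpp
// CPU reference for the 24-bit variant: interpolate from the original texel
// toward the palette entry by the palette entry's alpha, exactly like the
// HLSL's lerp(mainlookup, lookup, lookup.a). A palette alpha of 0 leaves the
// original full-color pixel untouched.
struct Color { float r, g, b, a; };

Color Cycle24(const Color& original, const Color& palette)
{
  float t = palette.a; // 0 = keep original color, 1 = fully palette-driven
  Color out;
  out.r = original.r + (palette.r - original.r) * t;
  out.g = original.g + (palette.g - original.g) * t;
  out.b = original.b + (palette.b - original.b) * t;
  out.a = original.a + (palette.a - original.a) * t;
  return out;
}
```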
If I had infinite time, I'd write a program that analyzed a palette based image and re-assigned all of the color indexes based on proximity, which would make animating using this method much easier. This will stay as a proof of concept until I get some non-copyrighted images to play with, at which point I'll probably throw an implementation of it inside my engine.
August 8, 2010
P != NP
A paper has been published proving that P != NP.
http://www.scribd.com/doc/35539144/pnp12pt
So far, peer review has confirmed this finding and no errors have been found. This paper answers a question that has been called the most important question in the field of computer science.
http://en.wikipedia.org/wiki/P_versus_NP_problem
P != NP has long been suspected by the majority of computer scientists, due to the fact that no polynomial time algorithms have been discovered for more than 3,000 known NP-complete problems. The consequences of proving either answer are explained in the paper's introduction:
Later, Karp [Kar72] showed that twenty-one well known combinatorial problems, which include TRAVELLING SALESMAN, CLIQUE, and HAMILTONIAN CIRCUIT, were also NP-complete. In subsequent years, many problems central to diverse areas of application were shown to be NP-complete (see [GJ79] for a list). If P != NP, we could never solve these problems efficiently. If, on the other hand, P = NP, the consequences would be even more stunning, since every one of these problems would have a polynomial time solution. The implications of this on applications such as cryptography, and on the general philosophical question of whether human creativity can be automated, would be profound.
If this paper continues to stand up to peer scrutiny and is declared a valid answer to the P vs. NP problem, this could very well mark a major historic moment in computer science and mathematics.
August 6, 2010
Physics Networking
I'm still working on integrating physics into my game, but at some point here I am going to hit on that one major hurdle: syncing one physics environment with another that could be halfway across the globe. There are a number of ways to do this; some of them are bad, and some of them are absolutely terrible.
If any of you have played Transformice, you will know what I mean by terrible. While I did decompile the game, I never bothered to examine their networking code, but I would speculate that they are simply mass-updating all the clients and not properly interpolating received packets. Consequently, when things get laggy, players don't just hop around, they completely clip through objects, even ones that they should never clip through no matter what the other players are doing.
The question is, how do you properly handle physics networking without flooding the network with unnecessary packets, especially when you are dealing with very large numbers of physics objects spread across a large map with many people interacting in complex ways?
In a proper setup, a client processes input from the user and sends it to the server. While it's waiting for a response, it interpolates the physics by guessing what all the other players are doing. If there are no other players, this interpolation can, for the most part, be considered perfectly accurate. This also generally holds true when the players are far enough away from each other that they can't directly or indirectly influence each other's interpolation.
Meanwhile, the server receives the input a little while later - say, 150 ms. In an ideal scenario, the entire physics world is rewound 150 ms and then re-simulated taking into account the player's action. The server then broadcasts new physics locations and the player's action to all other clients.
All the other clients now get these packets another 150 ms later. In an ideal scenario, all they would need to know is that the other player pressed a button 300 ms ago, rewind the simulation that much, then re-simulate to take it into account.
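That rewind-and-resimulate idea can be sketched minimally, with hypothetical names and a trivial 1D position/velocity pair standing in for the real physics world:

```cpp
#include <cassert>
#include <deque>

// Hypothetical rewind-and-resimulate sketch: the server keeps a short history
// of world states, one per fixed tick. When input arrives that actually
// happened 'ticks' ago, it rolls back to the state at that tick, applies the
// input, and re-steps forward to the present.
struct State { double pos, vel; };

struct World {
  std::deque<State> history; // one entry per tick, newest at the back
  State current{0.0, 0.0};

  void Step() {                         // advance one fixed tick
    history.push_back(current);
    current.pos += current.vel;
  }
  void ApplyInputAgo(int ticks, double impulse) {
    // Rewind: restore the state from 'ticks' ticks ago
    for (int i = 0; i < ticks; ++i) { current = history.back(); history.pop_back(); }
    current.vel += impulse;             // apply the late input at the correct tick
    for (int i = 0; i < ticks; ++i) Step(); // re-simulate forward to the present
  }
};
```

A real implementation would cap the history length and deal with inputs older than the buffer, but the control flow is the same.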
We are, obviously, not in an ideal scenario.
What exactly does this entail? In any interpolation function, speed is gained by making assumptions that introduce possible errors. This error margin grows as the player has more and more objects he might have interacted with. However, this also applies to each physics object against every other physics object, and in turn to each other physics object from that, so this boils down to the equation p = n · n!. That is the worst-case scenario. The best-case scenario is that none of the physics objects interact with each other, so the error margin is p = n and therefore linear.
Hence, we now know that interaction with physics objects - or possibly any sort of nonuniform physical force, like an explosion - is what creates the uncertainty problem. This is why the server always maintains its own physics world that is synced to everyone else, so that even if it's not always right, it's at least somewhat consistent. The question is, when do we need to send a physics packet, and when do we not need to send one?
If the player is falling through the air and jumps, and there is nothing he could possibly interact with, we can assume that any half-decent interpolation function will be almost perfect, so we don't actually have to send any physics packets. However, the more objects that are involved, the more uncertain things get and the more we need to send physics updates in case the interpolation functions get confused - and we have to send packets for all affected objects as well. We only have to do this when the player gives input to the game, though. If the player's input does not change, then he is obeying the interpolation laws of everyone else and no physics update is needed, since the interpolation always assumes the player's input status has not changed.
Hence, in a semi-ideal scenario, we only have to send physics packets for the player and any physics objects he might have collided with, and only when the player changes their input status. Right?
But wait - that works for the server, but not for all the other clients. Those clients receive the physics packets 150 ms late and have to interpolate those as well, introducing more uncertainty that cannot be eliminated. In addition, we aren't even in a semi-ideal scenario - the interpolation functions become more unreliable over time regardless of the player's input status.
However, this uncertainty itself is still predictable. The more objects a player is potentially interacting with, the more uncertain those object states are. This grows exponentially when other players are in the mix because we simply cannot guess what they might do in those 150 ms.
Consequently, one technique would be to send physics updates for all potentially affected objects surrounding the player whenever the player changes their input or interacts with another physics object. This alone will only work when the player is completely alone, and even then it requires some intermediate packets for higher uncertainty. Hence, one can create an update pattern that looks something like this:
Physics packet rapidity: (number of potential interacting physics objects) * ( (other players) * (number of their potential interacting objects) )
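As a rough translation of that pattern into code (the function name and the +1 base term are my own assumptions, not from any real implementation - the base term just keeps a lone, object-free player receiving an occasional correction packet):

```cpp
#include <cassert>

// Hypothetical packet-rate heuristic following the pattern above: more nearby
// physics objects, and more nearby players with their own interacting objects,
// mean a higher relative update rate. The +1 base term is an assumption.
int PacketRapidity(int myObjects, int otherPlayers, int theirObjects)
{
  return 1 + myObjects * (otherPlayers * theirObjects);
}
```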
More precise values could be attained by taking distance into account and understanding exactly what the interpolation function is doing and how. Building an interpolation function that is capable of rewinding and accurately re-simulating the small area around the player is obviously a very desirable course of action, because it removes the primary source of uncertainty and allows a lot more breathing room.
Either way, the general idea still stands - the closer your player is to a physics object, the more often that object (and the player) need to be updated, and when your player is close to another player, the update speed goes up exponentially. And always remember to send velocity data too!
I will make a less theoretical post on this after I've had an opportunity to do testing on what really works.
July 22, 2010
Assembly CAS implementation
inline unsigned char BSS_FASTCALL asmcas(int *pval, int newval, int oldval)
{
  unsigned char rval;
  __asm {
#ifdef BSS_NO_FASTCALL // if we are using fastcall we don't need these instructions
    mov EDX, newval
    mov ECX, pval
#endif
    mov EAX, oldval
    lock cmpxchg [ECX], EDX
    sete rval // note that sete sets a 'byte', not the word
  }
  return rval;
}
This was an absolute bitch to get working in VC++, so maybe this will be useful to someone, somewhere, somehow. The GCC version I based this off of can be found here.
Note that, obviously, this will only work on x86 architecture.
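For reference, the same operation can be had portably without inline asm (which x64 MSVC doesn't support at all); a sketch using C++11's std::atomic, which wasn't available when this was written:

```cpp
#include <atomic>
#include <cassert>

// Portable equivalent of the asmcas above using std::atomic (C++11).
// compare_exchange_strong compiles down to the same lock cmpxchg on x86 and
// returns true if *pval equaled oldval and was replaced with newval.
inline bool atomiccas(std::atomic<int> *pval, int newval, int oldval)
{
  return pval->compare_exchange_strong(oldval, newval);
}
```

Note that compare_exchange_strong takes its expected value by reference and overwrites it with the observed value on failure; passing oldval by value here hides that detail.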
July 21, 2010
PlaneShader v0.9.7
PlaneShader is a high-speed 2D rendering engine, which at some point will have a lot of really cool features, but right now just has all the basics.
- Image manipulation
- Dynamic optimization
- Grouping and parent/child relationships
- Depth (z-axis)
- Advanced culling
- Tilesets
- Animation
- Sprite animation generation
- GUI system
- Masking
- Particle system (Not GPU based yet)
- Limited shader support as of v0.9
- Gradients
- Vista/win7 Desktop composition
Cooler features like a built-in lighting system, per-image shaders, and distortion will be put in later, after I have a working demo of my game. I have not had time to write a functional .NET wrapper for the engine yet, sorry. Keep in mind that this is an alpha test, and while it probably doesn't have any serious leaks or bugs, I would not recommend using it in production.
Precompiled exes can be found in examples/bin.
PlaneShader.zip
July 17, 2010
Desktop Composition
I have succeeded in directly compositing my graphics engine to a Vista window, using a dynamic DLL load so it doesn't cause any problems on XP. I also managed to create a situation where the window is entirely transparent and impossible to click, which was an absolute bitch to get working. For future reference, if you ever want a window that is click-through, use CreateWindowEx with both WS_EX_TRANSPARENT and WS_EX_LAYERED. Using only one will not work, nor will any sort of windows message handling - that is the ONLY way to make the window click-through. Consequently, to do opacity-based hit-testing, you have to hook the mouse events for the entire desktop and manually notify your window after doing the calculations yourself (something I don't even want to think about).
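For reference, the two extended styles combine like this. The constant values are copied from the Windows SDK headers (winuser.h) and renamed with a trailing underscore so the sketch compiles outside of Windows; the CreateWindowEx call itself is left as a comment since it only compiles against windows.h:

```cpp
#include <cassert>
#include <cstdint>

// Values from the Windows SDK (winuser.h), reproduced so this is self-contained:
const uint32_t WS_EX_TRANSPARENT_ = 0x00000020; // clicks pass through the window
const uint32_t WS_EX_LAYERED_     = 0x00080000; // enables per-pixel alpha / composition

// Both flags are required for a click-through window; either one alone fails:
const uint32_t CLICKTHROUGH_STYLE = WS_EX_TRANSPARENT_ | WS_EX_LAYERED_;

// On Windows this style goes in as the first (dwExStyle) argument:
// HWND hwnd = CreateWindowEx(CLICKTHROUGH_STYLE, wndClassName, title,
//                            WS_POPUP, x, y, w, h, 0, 0, hInstance, 0);
```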
Pics!
There are other options, and you don't get to see the subtle adjustments I make to whether or not a window is draggable, or if it's click-through, etc., but it gives you a rough idea.
Now, on to stencil buffers and masking and maybe i'll release this thing only a week late! FFFFFFFFFFFFFFFFFFF-
July 13, 2010
Function Pointer Speed
So after a lot of misguided profiling where I ended up just testing the stupid CPU cache and its ability to fucking predict what my code is going to do, I have, for the most part, demonstrated the following:
if(!(rand()%2)) footest.nothing();
else footest.nothing2();
is slightly faster than
(footest.*funcptr[rand()%2])();
where funcptr is an array of the possible function calls. I had suspected this after I looked at the assembly, and a basic function pointer call like that takes around 11 instructions whereas a normal function call takes a single instruction.
In debug mode, however, if you have more than 2 possibilities, a switch statement's very existence takes up almost as many instructions as a single function pointer call, so the function pointer array technique is significantly faster with 3 or more possibilities. However, in release mode, if you write something like switch(rand()%3) and then just the function calls, the whole damn thing gets its own super special optimization that reduces it to about 3 instructions and hence makes the switch statement method slightly faster.
In all of these cases, though, the speed difference for 1000 calls is about 0.007 milliseconds and varies wildly. The CPU is doing so much architectural optimization that it most likely doesn't really matter which method is used. I do find it interesting that the switch statement gets super-optimized in certain situations, though.
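For completeness, the two dispatch styles being compared look like this when written out in full (footest, Foo, and the member functions are hypothetical stand-ins):

```cpp
#include <cassert>

// Hypothetical stand-ins for the two dispatch styles compared above.
struct Foo {
  int a = 0, b = 0;
  void nothing()  { ++a; }
  void nothing2() { ++b; }
};

typedef void (Foo::*FooFn)();

int dispatch_demo(int selector) // returns footest.a, for testing
{
  Foo footest;

  // Style 1: direct branch - both call targets are visible to the compiler.
  if (!(selector % 2)) footest.nothing();
  else                 footest.nothing2();

  // Style 2: member-function-pointer array - a single indirect call.
  static const FooFn funcptr[2] = { &Foo::nothing, &Foo::nothing2 };
  (footest.*funcptr[selector % 2])();

  return footest.a; // both styles pick the same target, so a ends up 0 or 2
}
```

Note the awkward `(obj.*ptr)()` syntax member function pointers require; that extra indirection is where the additional instructions come from.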
May 8, 2010
Most Bizarre Error Ever
OK, probably not the most bizarre error ever, but it's definitely the weirdest for me.
My graphics engine has a Debug, a Release, and a special Release STD version that's compatible with CLI function requirements and other dependencies. These are organized as 3 separate configurations in my solution for compiling. Pretty normal stuff.
My example applications are all set to be dependent on the graphics engine project, which means Visual Studio automatically links the proper lib file into the project.
Well, normally.
My graphics engine examples suddenly and inexplicably stopped working in Release mode, but not Debug mode. While this normally signals an uninitialized variable problem, I had only flipped a few negative signs since the last build, so it was completely impossible for that to be the cause. I was terribly confused so I went into the code and discovered that it was failing because the singleton instance of the engine was null.
Now, if you know what a singleton is, you should know that this is absolutely, completely impossible. Under normal conditions, that is. At first I thought my engine instance assignment had been fucked, but that was working fine. In fact, the engine existed for one call, and then didn't exist for the other.
Then I checked where the calls were coming from. What I discovered next blew my mind.
One call was from Planeshader.dll; the other call was from Planeshader_std.dll - oh crap.
Somehow, Visual Studio had managed to link my executable to both versions of my graphics engine at the same time. I'm not entirely sure how it managed that feat, since compiling two identical lib files creates thousands of collisions, but it appears that half the function calls were being sent to one DLL, and half to the other. My engine was trying to run in two DLLs simultaneously.
I solved the problem simply by explicitly specifying the lib file in the project properties.
Surely my skill at breaking things knows no bounds.
April 10, 2010
Multithreading
This was a mistake. I have learned several things about multithreading since I started attempting to restructure my engine around it. Recently I discovered that everything I did with multithreading was wrong. Lockless threading is what was invented for realtime applications like this. In addition, Windows has a serious issue with this kind of stuff, so the best library out there only works on Mac and Linux, and all the Windows lockless threading libraries are written in C# because they love the garbage collector.
Thus, even beginning to start with implementing lockless stuff would require me to multithread my memory management, my utilities, my graphics engine, my audio engine, and rewrite pretty much every single line of code I've written over the past 4 years. And then I'd have to test all of it.
So instead of making everything threaded, I have ended up having to remove all threads completely. I know that somewhere, somehow, a bunch of programmers are going to laugh at my poor, single-threaded game, but unless I had like 6 other programmers to help me out here, I have no choice. I've learned a lot about priorities, and I know not to do stupid things simply for the sake of being modern. Luckily, this is a 2D graphics engine, so the vast majority of the stress put on the CPU is the physics. Because of this, there may be a way for me to evaluate physics while I'm rendering the graphics and doing game logic. If I can just get the physics into another thread, even if it's a stop-and-go kind of thing, it would give me all the performance bonuses I need from the second CPU core without needing super-duper complex lockless architectures and god knows what else. It may also be possible to multithread the network update packets by taking advantage of the update function in the cReal objects and forcing them to wait until they are about to sync all their physics data to actually update the information. By carefully structuring the update functions, it should be possible to achieve fairly high performance without building an entire goddamn library.
Note to self: Update each physics step with a delta value from the graphics engine.
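That stop-and-go scheme can be sketched with a worker thread that steps physics while the main thread renders, joining at the frame boundary before any physics data is read back (std::thread here is a stand-in; the engine in question predates C++11):

```cpp
#include <cassert>
#include <thread>

// Sketch of the stop-and-go scheme: the physics step runs on a second thread
// while the main thread renders and runs game logic, and both join up at the
// end of the frame before anyone reads the physics data.
struct PhysicsWorld {
  int steps = 0;
  void Step(double delta) { ++steps; (void)delta; } // stand-in for Box2D etc.
};

void RunFrame(PhysicsWorld &world, double delta, int &frames_rendered)
{
  // Kick physics off on its own thread for this frame...
  std::thread physics([&] { world.Step(delta); });
  // ...while the main thread renders and runs game logic in parallel.
  ++frames_rendered;
  // Sync point: nothing touches the physics data until the step is done.
  physics.join();
}
```

This gets the second core working on physics without any shared mutable state during the frame, which is exactly why it needs no lockless machinery.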
March 24, 2010
Updated To-do-list
- Fix cVectorUnique
- Build basic editor
○ Integrate TinyXML map parser
○ Implement image selection
○ Build image editing GUI
○ Build image editing mouse functions
○ Put in XML modification and saving
○ Modify imagesplitter to take command lines
○ Auto-image-splitting
○ Test for stability
- Now C# interop
○ Engine dependencies first
○ Use Interface technique to bypass multiple inheritance incompatibilities
○ cImage dependency tree
○ Ignore GUI
○ Test for stability
- Build kd-tree implementation.
○ Each layer swaps between x and y and has a value that specifies where to make the split.
○ A balance integer is required so the tree can keep itself roughly balanced, but this should be a threshold difference of about 5 or so to prevent a single object from constantly resizing the tree.
○ Each layer holds a list of renderables of that depth. A renderable's total radius must fit entirely inside the branch in question, or if it has no rotation, do a simple bounds check.
○ Most of the work is done when a renderable is inserted, because this is when the bounds checks get made
○ When a renderable moves, it just checks its nearby nodes for a needed crossover.
○ When a renderable changes dimension, it may need to be moved to a higher level.
§ Note that you may be able to combine the scale and movement checks into a single bounds check
- Remove the radial check from anything using the kd-tree, but cImageZ doesn't use the kd-tree so keep the radial check for that one.
- Because use of the kd-tree is optional, you need a way to standardize its use. Some kind of function somewhere saying "Add to render queue" or "add to kd-tree" or something.
- Swap the render buffer to use an additive memory allocator
○ You will need to create a separate render buffer to maintain a list of renderables that don't use the kd-tree.
○ To do this properly you'll want to create a tree merge function with a mass allocation, which will allow for the transfer of memory in one batch per-frame, which should be crazy fast.
○ Do this first, then start adding in stuff from the kd-tree.
- Do stress testing on the kd-tree
○ Make line renders for the tree (that'll be a lot of fun to watch)
○ Ensure optimal performance on low and high density images.
§ Also check the resulting performance hit on a single image
- Implement multiple passes
○ Abstract out all rendering code
○ Move camera attachment to pass
○ Implement placeholder rendering order functionality
- Go back to implementing the lighting system
○ Ensure blending functions correctly
○ Implement shadows (No penumbra! but still use the same circle calculation so you're getting the correct umbra)
○ Calculate soft shadow points
○ Build soft shadow triangles
○ put in option to have object either be affected by light or not (Do it as a flag)
○ Put in coronas
○ put in option to have light cast shadows instead of being occluded. This is used for things like the sun, where color is ignored.
○ Optimize
○ Implement arc culling
○ ensure rect culling is working
○ Ensure backup textures allow it to work on the laptop
- Ensure lighting system works well
- Build example of lighting system
- Write up whitepaper for the lighting system using TeX
- Publish via gamedev.net and test reaction
- Implement new text renderer
- Now get back to linux server stuff
○ Transfer postgresql to mysql
○ Build testapp that utilizes that
○ Test on sea's linux box
○ Once stable, build the interface for NAT punchthrough and the game list.
○ Throw it on the webserver and pray to god it works
- Now go back to networked chat
○ Ensure connectivity tests work over a wide range, over internet, on LAN, through firewalls, etc.
○ Ensure chat functions properly
- Integrate lowest level physics
- Network lowest level physics
- Stabilize box2D hack
○ Set up a massive, horrifically nasty stress test for that.
○ Build low level network interpolation equation by defining the required detail level in terms of distance to object between this frame and the next frame.
§ I.E. if our object is moving this fast, it must have highest detail physics information about any object that it could hit by the next network ping, whatever that is.
*skip to main to-do-list*
- Get Brick replica to work
- Define 4 levels of physics serialization
- Finish writing box2D networking interpolation hack
- process packets and implement interpolation on a simple level
- Network physics
- throw bricks
- Implement destructables
- Implement a physics callback system
- Use this to implement impact damage based on relative physics formulas
- Sync destructables using RPC calls
- Put in health bars, network names, and other information
- Sync all this, including rudimentary score information as held by the server
- Build a functional basic shape editor
- Implement protocol buffers
- allow testbed activation on editor using in-game logic
- Build weapon system core
- Implement inventory
- Implement basic grappling gun
- Give GUI basic functionality (Weapon ammo tracking + health, etc.)
- Build options window and ensure most graphics options are functional
- Sync spawned weapon objects
- Build property-based weapon creation system
- Build weapons editor
- Design and implement weapon-centric distribution system
- Design weapon deadliness algorithm
- Implement weapon hashing and self-correcting danger network handling
- Implement anti-cheating weapon designs (weapon combination blacklist too)
- Differentiate between weapon restricted servers and open weapon servers
- Add LUA scripting core
- Integrate into weapons
- Extend weapons editor
- Implement complex object handling system
- Extend physics syncronization to handle complex objects
- Implement 2D nearest neighbor algorithm
- Test interpolation for complex object special cases
- Design complex object animation and syncronization schemes
- Extend weapons to allow for complex objects
- Extend editor to account for complex objects in generic cases
- Extend editor to handle basic animations for complex objects in generic cases
- Implement FX system
- Extend animation editor to handle animations for FX special cases
- Build specialized physics model for client-side FX.
- Integrate FX system into weapon subsystems and physics response system on a generic basis
- Make explosions
- Design hovering situation special-case for physics response system
- Apply this to giant hovering bases
- Ensure large physics object special-case in physics response system is stable
- Adapt 2D nearest neighbor algorithm for 2D lights
- Ensure lights act appropriately in indoor environments
- Implement powerups (including special-case physics response)
- Extend inventory to handle items on an abstract interactive basis
- Extend GUI into final mockup
- Implement unique kill registers for physics callbacks as dependent on weapon type/class/ID, as well as for specific event IDs
- Implement adaptive animation overloading system for complex avatars
- Ensure proper death animation as well as weapon swapping
- Abstract out the entire avatar into a class-system that must adapt for different body shapes.
- Implement class-specific statistics
- Create generic statistic trackers
- Build an interaction response system
- Combine interaction system with complex objects to create a generic vehicle class
- Convert base into a vehicle
- Build vehicles
- Implement vehicle spawn system and vehicle generic handling
- Build adaptive GUI system
- Create specialized vehicle GUI modifications
- Implement Map handling system
- Build map object spawn factory
- Network dynamic map changes
- Integrate LUA core into map scripting
- Compile list of basic map triggers
- Migrate objects over to map object handling
- Allow for multiple situational physics layers on base
- Get that stupid elevator to work
- Implement aircraft as a vehicle subset (this requires a physics response special case)
- Create Resource System
- Modify all spawned upgrades, powerups, vehicles and weapons to have generated resource costs.
- Implement drops
- Implement team resource counter as well as individual resource sharing systems
- Sync these over the network and apply anti-cheating subsystems
- Implement generic multiplayer statistic tracking over the client/server model
- Create the Lobby
- Add rooms
- Build server tracking system using the superserver
- Implement anti-cheating core on superserver and its authorization channels
- Ensure there are sufficient game creation options
- Test initial join and in-game join combinations
- Implement multiplayer statistic tracking over the entire superserver model and website (concept of a 'confirmed kill')
- Website integration
- Build basic editor
○ Integrate TinyXML map parser
○ Implement image selection
○ Build image editing GUI
○ Build image editing mouse functions
○ Put in XML modification and saving
○ Modify imagesplitter to take command-line arguments
○ Auto-image-splitting
○ Test for stability
- Now C# interop
○ Engine dependencies first
○ Use Interface technique to bypass multiple inheritance incompatibilities
○ cImage dependency tree
○ Ignore GUI
○ Test for stability
- Build kd-tree implementation.
○ Each layer swaps between x and y and has a value that specifies where to make the split.
○ A balance integer is required so the tree can keep itself roughly balanced, but this should be a threshold difference of about 5 or so to prevent a single object from constantly resizing the tree.
○ Each layer holds a list of renderables of that depth. A renderable's total radius must fit entirely inside the branch in question, or if it has no rotation, do a simple bounds check.
○ Most of the work is done when a renderable is inserted, because this is when the bounds checks get made
○ When a renderable moves, it just checks its nearby nodes for a needed crossover.
○ When a renderable changes dimension, it may need to be moved to a higher level.
§ Note that you may be able to combine the scale and movement checks into a single bounds check
- Remove the radial check from anything using the kd-tree, but cImageZ doesn't use the kd-tree so keep the radial check for that one.
- Because use of the kd-tree is optional, you need a way to standardize its use. Some kind of function somewhere saying "Add to render queue" or "add to kd-tree" or something.
- Swap the render buffer to use an additive memory allocator
○ You will need to create a separate render buffer to maintain a list of renderables that don't use the kd-tree.
○ To do this properly you'll want to create a tree merge function with a mass allocation, which will allow for the transfer of memory in one batch per-frame, which should be crazy fast.
○ Do this first, then start adding in stuff from the kd-tree.
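The "additive memory allocator" above is essentially a bump arena: each allocation just advances a pointer, and the whole per-frame batch is released with a single reset. A minimal sketch, with hypothetical names rather than actual engine code:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstdlib>

// Minimal bump ("additive") arena: each allocation just advances a pointer,
// and the whole frame's worth of memory is released with a single reset.
class BumpArena {
public:
  explicit BumpArena(std::size_t bytes)
      : _begin(static_cast<std::uint8_t*>(std::malloc(bytes))),
        _cur(_begin),
        _end(_begin + bytes) {}
  ~BumpArena() { std::free(_begin); }

  // Returns nullptr when the arena is exhausted instead of growing.
  void* alloc(std::size_t bytes, std::size_t align = alignof(std::max_align_t)) {
    std::uintptr_t p = reinterpret_cast<std::uintptr_t>(_cur);
    p = (p + (align - 1)) & ~(static_cast<std::uintptr_t>(align) - 1);  // round up
    std::uint8_t* out = reinterpret_cast<std::uint8_t*>(p);
    if (out + bytes > _end) return nullptr;
    _cur = out + bytes;
    return out;
  }

  void reset() { _cur = _begin; }  // per-frame batch free
  std::size_t used() const { return static_cast<std::size_t>(_cur - _begin); }

private:
  std::uint8_t* _begin;
  std::uint8_t* _cur;
  std::uint8_t* _end;
};
```

Because nothing is freed individually, the "tree merge with a mass allocation" mentioned above reduces to one `alloc` per batch and one `reset` per frame.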
- Do stress testing on the kd-tree
○ Make line renders for the tree (that'll be a lot of fun to watch)
○ Ensure optimal performance on low and high density images.
§ Also check the resulting performance hit on a single image
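The kd-tree described above can be sketched roughly as follows. This is a simplified illustration of the insert/straddle rule only: `AABB` and `KDNode` are hypothetical names, not the engine's actual classes, and a fixed depth cap stands in for the balance integer.

```cpp
#include <cassert>
#include <memory>
#include <vector>

struct AABB { float minx, miny, maxx, maxy; };

inline bool Overlaps(const AABB& a, const AABB& b) {
  return a.minx <= b.maxx && b.minx <= a.maxx && a.miny <= b.maxy && b.miny <= a.maxy;
}

// Each level alternates between the x and y axis and stores a split value.
// A renderable whose bounds straddle the split stays pinned at that node;
// otherwise it sinks into the matching child.
struct KDNode {
  AABB region;
  int axis;            // 0 = split on x, 1 = split on y
  float split;
  std::vector<AABB> items;  // renderables pinned at this depth
  std::unique_ptr<KDNode> low, high;

  KDNode(const AABB& r, int ax) : region(r), axis(ax) {
    split = ax == 0 ? 0.5f * (r.minx + r.maxx) : 0.5f * (r.miny + r.maxy);
  }

  AABB LowHalf() const  { AABB r = region; (axis == 0 ? r.maxx : r.maxy) = split; return r; }
  AABB HighHalf() const { AABB r = region; (axis == 0 ? r.minx : r.miny) = split; return r; }

  void Insert(const AABB& box, int depth) {
    if (depth > 0) {
      float lo = axis == 0 ? box.minx : box.miny;
      float hi = axis == 0 ? box.maxx : box.maxy;
      if (hi <= split) {  // fits entirely in the low half
        if (!low) low.reset(new KDNode(LowHalf(), axis ^ 1));
        low->Insert(box, depth - 1); return;
      }
      if (lo >= split) {  // fits entirely in the high half
        if (!high) high.reset(new KDNode(HighHalf(), axis ^ 1));
        high->Insert(box, depth - 1); return;
      }
    }
    items.push_back(box);  // straddles the split (or hit max depth): stays here
  }

  // Collect everything overlapping a query rect (e.g. the camera view).
  void Query(const AABB& q, std::vector<AABB>& out) const {
    for (const AABB& b : items)
      if (Overlaps(b, q)) out.push_back(b);
    if (low && Overlaps(low->region, q)) low->Query(q, out);
    if (high && Overlaps(high->region, q)) high->Query(q, out);
  }
};
```

The movement and rescale checks described above would then amount to re-testing a renderable's bounds against its node's split and promoting or demoting it as needed.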
- Implement multiple passes
○ Abstract out all rendering code
○ Move camera attachment to pass
○ Implement placeholder rendering order functionality
- Go back to implementing the lighting system
○ Ensure blending functions correctly
○ Implement shadows (No penumbra! But still use the same circle calculation so you're getting the correct umbra)
○ Calculate soft shadow points
○ Build soft shadow triangles
○ put in option to have object either be affected by light or not (Do it as a flag)
○ Put in coronas
○ put in option to have light cast shadows instead of being occluded. This is used for things like the sun, where color is ignored.
○ Optimize
○ Implement arc culling
○ ensure rect culling is working
○ Ensure backup textures allow it to work on the laptop
- Ensure lighting system works well
- Build example of lighting system
- Write up whitepaper for the lighting system using TeX
- Publish via gamedev.net and test reaction
- Implement new text renderer
- Now get back to linux server stuff
○ Transfer PostgreSQL to MySQL
○ Build testapp that utilizes that
○ Test on sea's linux box
○ Once stable, build the interface for NAT punchthrough and the game list.
○ Throw it on the webserver and pray to god it works
- Now go back to networked chat
○ Ensure connectivity tests work over a wide range, over internet, on LAN, through firewalls, etc.
○ Ensure chat functions properly
- Integrate lowest level physics
- Network lowest level physics
- Stabilize box2D hack
○ Set up a massive, horrifically nasty stress test for that.
○ Build low level network interpolation equation by defining the required detail level in terms of distance to object between this frame and the next frame.
§ I.e., if our object is moving this fast, it must have the highest-detail physics information about any object that it could hit by the next network ping, whatever that is.
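That rule can be sketched as a tiny function: the reach radius is just speed times the ping interval, and detail falls off with distance beyond it. The names, the tier count, and the 2x/4x falloff here are all hypothetical placeholders:

```cpp
#include <cassert>

// Hypothetical sketch: pick a physics-serialization detail level for a remote
// object based on whether our object could reach it before the next update.
enum DetailLevel { DETAIL_MIN = 0, DETAIL_LOW = 1, DETAIL_HIGH = 2, DETAIL_FULL = 3 };

DetailLevel PickDetail(float distance, float speed, float pingSeconds) {
  float reach = speed * pingSeconds;  // worst-case travel before the next packet
  if (distance <= reach)        return DETAIL_FULL;  // could collide before next ping
  if (distance <= 2.0f * reach) return DETAIL_HIGH;
  if (distance <= 4.0f * reach) return DETAIL_LOW;
  return DETAIL_MIN;
}
```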
*skip to main to-do-list*
- Get Brick replica to work
- Define 4 levels of physics serialization
- Finish writing box2D networking interpolation hack
- process packets and implement interpolation on a simple level
- Network physics
- throw bricks
- Implement destructibles
- Implement a physics callback system
- Use this to implement impact damage based on relative physics formulas
- Sync destructibles using RPC calls
- Put in health bars, network names, and other information
- Sync all this, including rudimentary score information as held by the server
- Build a functional basic shape editor
- Implement protocol buffers
- allow testbed activation on editor using in-game logic
- Build weapon system core
- Implement inventory
- Implement basic grappling gun
- Give GUI basic functionality (Weapon ammo tracking + health, etc.)
- Build options window and ensure most graphics options are functional
- Sync spawned weapon objects
- Build property-based weapon creation system
- Build weapons editor
- Design and implement weapon-centric distribution system
- Design weapon deadliness algorithm
- Implement weapon hashing and self-correcting danger network handling
- Implement anti-cheating weapon designs (weapon combination blacklist too)
- Differentiate between weapon restricted servers and open weapon servers
- Add LUA scripting core
- Integrate into weapons
- Extend weapons editor
- Implement complex object handling system
- Extend physics synchronization to handle complex objects
- Implement 2D nearest neighbor algorithm
- Test interpolation for complex object special cases
- Design complex object animation and synchronization schemes
- Extend weapons to allow for complex objects
- Extend editor to account for complex objects in generic cases
- Extend editor to handle basic animations for complex objects in generic cases
- Implement FX system
- Extend animation editor to handle animations for FX special cases
- Build specialized physics model for client-side FX.
- Integrate FX system into weapon subsystems and physics response system on a generic basis
- Make explosions
- Design hovering situation special-case for physics response system
- Apply this to giant hovering bases
- Ensure large physics object special-case in physics response system is stable
- Adapt 2D nearest neighbor algorithm for 2D lights
- Ensure lights act appropriately in indoor environments
- Implement powerups (including special-case physics response)
- Extend inventory to handle items on an abstract interactive basis
- Extend GUI into final mockup
- Implement unique kill registers for physics callbacks as dependent on weapon type/class/ID, as well as for specific event IDs
- Implement adaptive animation overloading system for complex avatars
- Ensure proper death animation as well as weapon swapping
- Abstract out the entire avatar into a class-system that must adapt for different body shapes.
- Implement class-specific statistics
- Create generic statistic trackers
- Build an interaction response system
- Combine interaction system with complex objects to create a generic vehicle class
- Convert base into a vehicle
- Build vehicles
- Implement vehicle spawn system and vehicle generic handling
- Build adaptive GUI system
- Create specialized vehicle GUI modifications
- Implement Map handling system
- Build map object spawn factory
- Network dynamic map changes
- Integrate LUA core into map scripting
- Compile list of basic map triggers
- Migrate objects over to map object handling
- Allow for multiple situational physics layers on base
- Get that stupid elevator to work
- Implement aircraft as a vehicle subset (this requires a physics response special case)
- Create Resource System
- Modify all spawned upgrades, powerups, vehicles and weapons to have generated resource costs.
- Implement drops
- Implement team resource counter as well as individual resource sharing systems
- Sync these over the network and apply anti-cheating subsystems
- Implement generic multiplayer statistic tracking over the client/server model
- Create the Lobby
- Add rooms
- Build server tracking system using the superserver
- Implement anti-cheating core on superserver and its authorization channels
- Ensure there are sufficient game creation options
- Test initial join and in-game join combinations
- Implement multiplayer statistic tracking over the entire superserver model and website (concept of a 'confirmed kill')
- Website integration
March 14, 2010
Texas Fucks Everyone
Texas conservatives screw history
Texas Board of Education cuts Thomas Jefferson out of its textbooks.
The Texas Board of Education has been meeting this week to revise its social studies curriculum. During the past three days, the board's far-right faction wielded its power to shape lessons on the civil rights movement, the U.S. free enterprise system and hundreds of other topics:
To avoid exposing students to transvestites, transsexuals and who knows what else, the Board struck the curriculum's reference to sex and gender as social constructs.
The Board removed Thomas Jefferson from the Texas curriculum, replacing him with religious right icon John Calvin.
The Board refused to require that students learn that the Constitution prevents the U.S. government from promoting one religion over all others.
The Board struck the word "democratic" from the description of the U.S. government, instead terming it a "constitutional republic."
So now the United States isn't a democratic nation anymore because someone didn't like it. I guess Texans aren't worthy of being called human beings anymore because I don't like them. I guess Fred Meyer shouldn't let conservatives into the store because they're a public nuisance. Why don't we just send all the kids into fucking gas chambers and speed up the process?! How about we just re-name America to "The United States of Idiocy". If anyone so much as says one word in support of Texas, I will BURN YOU ALIVE. I have HAD IT with people amusingly demanding that I be more accepting of other political views.
Oh, I need to be more accepting of other political views. SURE. I have a better idea, why don't you all go fuck yourselves because I don't care. So far I have met about 150 people who have qualified themselves as decent human beings. About 40 of them don't have self-esteem issues.
Society is destroying the only human beings worthy of being called "human." People who are sensible. People who listen to opposing evidence. People who are actually attempting to do something useful. People who understand what is important, people who know that it's our intelligence, our creativity, and our dreams that are worth fighting for. People who don't understand why anyone would consider them worthy of such praise without realizing that it is this very trait that puts them far above the average. It's people who know they have to earn their life.
March 9, 2010
Fucking HDR
I really, really have to avoid thinking about HDR, because once I start working on one thing, I become completely and utterly obsessed with it. I'm talking an obsession more extreme than my love of bunnies. I'm talking an obsession that would have kept me up all night if I had let it. I've tried approximately 240 different equations in my head and none of them work. The approaches don't work. Nothing seems to work other than a few key things:
1. Bloom is subtle
2. Bloom is only visible under high contrast environments
Problem: How the fuck does one determine what exactly high contrast is?
3. IRL, bloom kicks in at a certain threshold below the brightness in question, and then anything getting too close to the original HDR brightness is completely eliminated.
4. It almost seems like you could just subtract the original source from the bloom, then somehow add this back into the calculation and do the HDR threshold calculation. This doesn't make sense mathematically though.
5. The IRL bloom tends to extend extremely far, but at the very edges it's incredibly faint and just barely noticeable. This is stupidly hard to implement in a graphics pipeline, because when you get things that faint they tend to just drop out completely instead of being subtle. If you try to use the inverse square law, squaring the values just removes the lower ones completely. I have been unable to find (for LDR anyway) an equation that preserves the subtle low values while throttling the high values. It might be that working with values of 0 to 1 is flawed, and I thought I could instead add 1 so that the lowest possible value is 1, thereby enabling division that makes sense. I implemented a version of this that did a really nice job of throttling the HDR values, but it still didn't seem to work very well for bloom.
6. I'm fairly confident the key here lies in operating under HDR conditions by throwing out the assumptions normal bloom shaders use that are often used just to compensate for woefully inadequate spectrum analysis.
7. One thing that's really annoying is that I'm not entirely sure what I'm looking for each time I attempt to implement this. Furthermore most HDR pictures are corrupted by natural bloom which fucks with my own bloom if I try to use them. For an ideal testing scenario a full HDR pipeline must be implemented with proper HDR spectrum analysis for any of this to be effective. Attempting to put proper bloom on a traditional HDR crush algorithm will look stupid because the image has already been irreparably destroyed by the flawed HDR processing.
8. Note that the hilariously bad HDR example in the DirectX SDK has a natural bloom technique that, when not being grossly overused, is actually reasonably accurate in that it is circular and has a linear falloff. Ideally the falloff would be more exponential but that's proven to be stupendously difficult, so a subtler version of this would work really nicely.
9. hidden
I need to shut down my train of thought here before it overrides everything else (it's finals week for crying out loud), but hopefully the next time I become obsessed with this, it will be next year, about this time, when I have the proper environment to work in. When that happens I need to do some research on photographic HDR analysis, since I'm pretty sure they've figured out equations for a lot of the spectrum analysis.
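For what it's worth, the "add 1 so the lowest possible value is 1" idea in point 5 is essentially the Reinhard tone-mapping operator L/(1+L): faint values pass through almost unchanged while arbitrarily bright HDR values are throttled toward 1 instead of clipping. A one-line sketch of that curve (not a full bloom solution):

```cpp
#include <cassert>
#include <cmath>

// Reinhard tone mapping: out = L / (1 + L). For small L this is nearly the
// identity (preserving subtle low values); for large L it asymptotically
// approaches 1 (throttling HDR highlights without a hard clip).
float ReinhardToneMap(float luminance) {
  return luminance / (1.0f + luminance);
}
```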
February 12, 2010
Holy crap
I must have accidentally imbued my to-do-list with rabbits or something, because it just gets bigger and bigger.
To-Do
- _bucketsort cRBT_list isn't working because the getnear function doesn't differentiate between a "before" and an "after."
○ Implement checking for this so it can fail appropriately
○ Figure out if it needs to be a before or an after for the actual getnear call in the alloc function
- Do stress tests on the memory allocator and make sure it doesn't fail under any circumstances
- Do a test on the string pool allocator being inside the DLL and try to figure out why the hell the allocator gets initialized like 6 times.
- Finalize the string pool for usage
- Finish the last bit of the string table
- Build kd-tree implementation.
○ Each layer swaps between x and y and has a value that specifies where to make the split.
○ A balance integer is required so the tree can keep itself roughly balanced, but this should be a threshold difference of about 5 or so to prevent a single object from constantly resizing the tree.
○ Each layer holds a list of renderables of that depth. A renderable's total radius must fit entirely inside the branch in question, or if it has no rotation, do a simple bounds check.
○ Most of the work is done when a renderable is inserted, because this is when the bounds checks get made
○ When a renderable moves, it just checks its nearby nodes for a needed crossover.
○ When a renderable changes dimension, it may need to be moved to a higher level.
§ Note that you may be able to combine the scale and movement checks into a single bounds check
- Remove the radial check from anything using the kd-tree, but cImageZ doesn't use the kd-tree so keep the radial check for that one.
- Because use of the kd-tree is optional, you need a way to standardize its use. Some kind of function somewhere saying "Add to render queue" or "add to kd-tree" or something.
- Swap the render buffer to use an additive memory allocator
○ You will need to create a separate render buffer to maintain a list of renderables that don't use the kd-tree.
○ To do this properly you'll want to create a tree merge function with a mass allocation, which will allow for the transfer of memory in one batch per-frame, which should be crazy fast.
○ Do this first, then start adding in stuff from the kd-tree.
- Do stress testing on the kd-tree
○ Make line renders for the tree (that'll be a lot of fun to watch)
○ Ensure optimal performance on low and high density images.
§ Also check the resulting performance hit on a single image
Implement front-to-back transparency blending
- First, undo all that stuff you just did.
- The first thing that needs to be working is the multi-texture technique.
- Then, you need to make sure the blending is there, and that the backbuffer has an alpha (Should be done already)
- At the end of the render, reset the alpha channel to opaque
- Now you should be able to see the results. Ensure the blending functions properly under high transparency complexities and adjust it as necessary
- Once the blending is working perfectly, enable the stencil buffer and do a write when the pixel is opaque (or, in the case of this algorithm, completely transparent)
- Now extend this to multi-texture scenarios
- Then, ensure that it still works with the lighting technique.
○ Later on you can figure out if the alternate lighting method is still viable
- Done properly, the normalmaps should automatically use this technique as well. You may need to tweak that a bit though
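The front-to-back blending described above corresponds to the standard "under" compositing operator: the destination alpha tracks the remaining transmittance, and the stencil write kicks in once it reaches zero. A CPU-side sketch of the per-pixel math (an illustration, not the actual shader):

```cpp
#include <cassert>
#include <cmath>

struct RGBA { float r, g, b, a; };

// Front-to-back "under" compositing. dst starts as {0,0,0,1}: no color yet,
// full transmittance remaining. Layers are composited nearest-first; each one
// contributes through whatever transmittance is left, and once dst.a reaches 0
// the pixel is effectively opaque (no further layers can show through).
void CompositeUnder(RGBA& dst, const RGBA& src) {
  float t = dst.a * src.a;    // how much of this layer actually shows
  dst.r += t * src.r;
  dst.g += t * src.g;
  dst.b += t * src.b;
  dst.a *= (1.0f - src.a);    // remaining transmittance shrinks
}
```

Compositing a 50% red layer over an opaque green background this way yields the same 50/50 blend back-to-front blending would, which is the property that makes the order swap legal.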
- Go back to implementing the lighting system
○ Ensure blending functions correctly
○ Implement shadows (No penumbra! But still use the same circle calculation so you're getting the correct umbra)
○ Calculate soft shadow points
○ Build soft shadow triangles
○ put in option to have object either be affected by light or not (Do it as a flag)
○ Put in coronas
○ put in option to have light cast shadows instead of being occluded. This is used for things like the sun, where color is ignored.
○ Optimize
○ Implement arc culling
○ ensure rect culling is working
○ Ensure backup textures allow it to work on the laptop
- Implement new text renderer
- Do C# interop
February 9, 2010
No More ECON
Fuck economics. I studied dutifully for that test and almost everyone in the class was still confused over badly written, misleading questions. In math, either you are right or you are wrong. In philosophy, either you make a convincing argument, or you don't. In economics, either you are wrong, or you are very, very wrong. I swear to god there is no "right" in economics, there are only varying degrees of wrongness and you must strive to be as not wrong as possible in order to pass the stupid class. Either that or you need to be telepathic.
I want my fucking programming classes, damn it.
January 30, 2010
Floating Point Performance
Whenever I do intensive performance testing on delicate math operations, the results almost always surprise me. Today I have learned that if a program performs a divide-by-zero on a floating point operation (which does not blow up the program), the resulting performance hit is almost equivalent to taking a square root.
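The "does not blow up the program" part is consistent with IEEE 754: float division by zero doesn't trap, it quietly produces ±infinity (or NaN for 0/0), and many CPUs handle those special values on a slower path. A quick sketch demonstrating why the program survives (verifying the actual performance cost would need a real benchmark):

```cpp
#include <cassert>
#include <cmath>

// IEEE 754 floating-point division by zero does not crash: it produces
// ±infinity for nonzero numerators and NaN for 0/0. The volatile keeps the
// compiler from folding the division away at compile time.
float DivideByZero(float numerator) {
  volatile float zero = 0.0f;
  return numerator / zero;
}
```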
January 16, 2010
Volumetric Rendering in Realtime
1. Do a depth render with additive blending and counterclockwise polygon culling.
2. Do a depth render with additive blending and clockwise polygon culling.
3. Subtract the result of step 1 from step 2 (or 2 from 1, depending on your winding order), and you get precisely how much that pixel is "inside" the volume in question.
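The subtraction step can be sanity-checked in software for a convex shape, where the front and back depths at each pixel are known analytically. This numpy sketch (the name `sphere_thickness` is hypothetical) mirrors the two-pass difference for a sphere:

```python
import numpy as np

def sphere_thickness(cx, cy, r, width, height):
    """Per-pixel 'inside the volume' amount for a sphere, computed the
    way the two-pass depth trick does it: back-face depth minus
    front-face depth along the view ray."""
    ys, xs = np.mgrid[0:height, 0:width]
    d2 = (xs - cx) ** 2 + (ys - cy) ** 2
    # Half the chord length where the ray crosses the sphere (0 if it misses)
    half = np.sqrt(np.maximum(r * r - d2, 0.0))
    front = -half  # near surface depth (camera looking down +z)
    back = half    # far surface depth
    return back - front
```

For a convex volume each ray enters and exits once, so a single subtraction suffices; with additive blending the same difference also works for concave volumes, since every entry/exit pair contributes its own chord.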
January 7, 2010
Watch This Now
The following movie was made, in its entirety, by a single person. It is entirely CGI, and if you don't believe me, check out the compositing breakdown.
The Third & The Seventh from Alex Roman on Vimeo.
January 2, 2010
Extraordinary
Today I woke up after having the most amazing dream ever. What I've tried to do here is extract the essence of the dream's plot ideas and work them into a concrete story. Note that the story would be used in a playable game that functions a lot like Myst, although with real-time interactions and time-based events. I've so far recognized elements that have been subconsciously influenced by: Terminator 4, Myst, Myst 3, Jump Start 3rd Grade (I'm not kidding), Aeon Flux, Watchmen, The Westing Game, and others. I have used several other previous dreams as jumping-off points for ideas as well. The song I'm currently listening to also sounds like the theme song.
Extraordinary
by Erik McClure
- Prologue -
ex·traor·di·nar·y (k-strôrdn-r, kstr-ôr-)
adj.
1. very unusual, remarkable, or surprising
2. not in an established manner, course, or order
3. employed for particular events or purposes
The House On Top Of The World. That was its name. A magnificent mansion perched on the cliffside of a mountain overlooking the sea. With windows of glass, over 20 levels, and technology more advanced than most people thought possible, it was a king's paradise. Five people currently lived there, the most advanced scientific minds of the 22nd century. A sixth was to join them soon.
Brianna Vexhearth - Genome Splicer, Botanist, and Mechanical Engineer.
Jack Nolung - Physicist, Architect, Mathematician, and Structural Engineer.
Zachary Nchateul - Organic Technology Expert and Computer Programmer.
Vidyal Mistral - Quantum Theorist, Mathematician, and Game Theorist.
Natasha Sabisten - Physicist, Quantum Gravity Theorist, Electrical Engineer, and Audio Engineer.
And the sixth: an AI/Human Interaction Specialist and Interface Designer. They say they're building something up there. Something big. Perhaps Nick McCrath is the missing link.
Synopsis
The player is carried to a mansion on top of a mountain, inside which the entirety of the game exists. The entry level has a welcoming area and guest services; the floor below that, along with the basement and sub-basement, houses the inner workings of the house (like a miniature fusion reactor). The floor above the entry level is reserved for entertainment, the second floor is designed for meetings and collaborative research efforts, then there's a garden, and every two floors above that are dedicated to each member of The Five, with Brianna Vexhearth on the bottom and Jack Nolung (the designer of the house) on top. The floors above that are unknown.
Sub-basement - Fusion Maintenance and supporting infrastructure.
Basement - Fusion reactor, heating, boiler, electrical management, Supercomputer, etc.
Floor 0 - Self-sufficient cleaning apparatus (air scrubbers, water purifiers, etc.)
Lobby
1st Floor - Entertainment
2nd Floor - Collaborative Research
3rd Floor - Garden
4th Floor - Brianna Vexhearth Labs
5th Floor - Brianna Vexhearth Living Quarters
6th Floor - Zachary Nchateul Labs
7th Floor - Zachary Nchateul Living Quarters
8th Floor - Vidyal Mistral Labs
9th Floor - Vidyal Mistral Living Quarters
10th Floor - Natasha Sabisten Labs
11th Floor - Natasha Sabisten Living Quarters
12th Floor - Jack Nolung Labs
13th Floor - Jack Nolung Living Quarters
14th Floor - Classified
15th Floor - Classified
16th Floor - Classified
17th Floor - Classified
18th Floor - Classified
19th Floor - Classified
20th Floor - Classified
The fact that the house is on the side of a giant cliff makes for some interesting visuals. On several levels there are glass enclosures outside the main structure, and multiple catwalks which are, shall we say, interesting to walk across. There are obviously several elevators and various staircases. While the initial appearance of the house seems relatively normal, if quite nice looking, it has subtle instances of incredible technology. Holograms are commonplace, along with 3D interfaces, a supercomputer running the house, and automated systems all over the place. The house is capable of repairing itself and cleaning itself without human interference, and with the fusion reactor it is estimated that it could keep itself running perfectly fine for over 5 centuries if need be.
The player is the sixth member that is permitted to enter the house. As per protocol, they are given two weeks to explore the house and get acquainted with everything and everyone before moving in permanently. Of course, that's when things start to go wrong. Experiments start exploding (or rather, start exploding at a higher rate than normal), robots start to misbehave, and things escalate until one of The Five goes missing entirely. Naturally, the player is immediately under suspicion, but the player knows it couldn't be him, because he's trying to find whoever is sabotaging the systems. It becomes obvious that the player isn't the saboteur when he is almost killed and catches a glimpse of a shadowy figure wandering around the outside of the house.
The remaining scientists are convinced that someone must have caught a ride with the player and broken into the house. Sleeping is dangerous. Experiments could go wrong at any time. Tensions are high and fights are common. Then, Jack does the unthinkable - he blames the player. He says that he knows it's the player, even after the player has saved numerous scientists from their own experiments gone haywire. Then, he explains what has happened.
The player is an expert in Human/AI interactions. Human interactions. What has in fact occurred is a conspiracy so ingenious that the player isn't even aware of it. He has set out a series of notes to himself such that he engineered almost all of the experiment failures except the ones he saved several scientists from. When he told that one scientist where to go, it was the wrong way that he remembered from his _notes_. Everything he did according to his notes in fact worked against them. He was an unknowing agent of destruction, perpetrated by himself, and then somehow forgotten using some kind of memory modification technique. This is suddenly proven true when the player realizes that the path he was taking was not, in fact, a shortcut like his notes claimed, but a path that would have kept him and only him out of harm's way while leaving the rest of them to die.
The problem is that the player doesn't want to kill them, but he has already set in motion a series of events that will overload the fusion reactor, and now they must escape. The front door gets jammed shut and several pathways out are blocked to the point that they can't get far enough away from the blast radius. Then Jack says something about the restricted area. This is a bit confusing because the player had been given access to all the classified levels shortly after arriving and had helped in some of them.
"There's a 23rd floor."
And then the player gets to see something totally amazing. I haven't entirely decided what it is, but it's at least three stories tall, since the floor numbers go from 19, to 20, to 23.