I graduate college in 3 months. I filed the paperwork and started a company almost a year ago, but after getting hammered by homework, it is sitting in dilapidation. For five years I've fought to try and make my dreams a reality. For five years, I've failed, miserably. I am more fortunate than most in that my parents are perfectly willing to let me stay with them throughout college, so failure is relatively painless for me.
I have failed at almost everything I have ever attempted to do. Whether through foolish ambition or rash decisions, the only projects I ever actually completed were pathetically simplistic, and usually done for school. It started when I was 12 and tried to build a campaign for Age of Mythology, a real project I could call my own instead of simply tinkering with and upgrading existing ideas. I made half of one mission and gave up.
When I was 13, I was introduced to Freelancer, and became very interested in building mods for it. I volunteered to help an attempt at recreating the Halo universe inside Freelancer. For almost 2 years, I tried to help in various ways without having any real skills, mostly by organizing schedules and development plans. Then I got myself banned from the very mod I'd worked on so hard, and had my dreams crushed by a lead developer who screamed at me for being a useless piece of shit that had just annoyed the crap out of everyone for two years instead of doing anything useful.
A short while later, I teamed up with someone else who had left that same mod for unrelated reasons, in an attempt to rebuild an open-source version of Freelancer that would be bigger and better in every way. At least this time I knew it was stupidly ambitious, but I was hoping that by creating it as a community effort, that ambition could be met by the combined skills of many people. Obviously, this never worked. It never even got close to working. I had no idea what I was doing, had learned the basics of C++ only a few months before, and had no idea how to build a 3D game.
In the middle of this, I got my first programming job: a royalty-only position on a laughable attempt at making a spiritual successor to the Descent series. I was forced to work with an artist who was completely fucking insane and a manager who was either totally incompetent or hopelessly idealistic. A month after I took the job, the last remaining programmer quit, leaving me, a 15-year-old high school kid, as the "lead programmer" of a game when I had no idea what I was doing. I quit 2 months later and learned never to take royalty pay, ever.
After finally giving up on my stupid open-source Freelancer project, I played Cave Story, and realized that instead of using someone else's graphics engine, I could build my own. If Cave Story could be such a compelling game with such a basic graphics engine, surely I could make one too? Back then, the best open-source option for 2D graphics was SDL, which didn't do much more than draw things on the screen. I decided I would build a better, open-source engine in C#. Like everything else I had ever worked on, it was a disaster.
After rebuilding the engine in C++ and coming up with some rather inventive ideas for how to do unusual 2D graphics, I quickly realized that it should really be a proprietary engine. For the first time I began thinking about making a living through my own projects instead of working for some giant corporation. It was at this point I saw an incredible flash animation and sent some very bad fanart to a certain amazing person. I had a vision of something truly remarkable, something amazing, the missing link to my constant daydreams and fantasies and bizarre programming experiments. When I discovered that this person I was practically worshipping as an idol had similar ideas, I instantly knew that I had to find a way to make it happen. In the spring of 2008, everything in my entire life became focused on achieving one, singular goal - make that game idea into reality.
That summer, I landed an internship at Microsoft and spent 3 months working the only real job I've ever had. I hated it. The entire time, I had been working on a much simpler 2D game idea, hoping it would serve as practice for what my Cave Story imitation had morphed into: an epic multiplayer-focused game that bore absolutely no resemblance to the original game idea. It was supposed to be my company's breakout title, a way to generate the funds needed to build my idol's game idea, and I'd build most of it during my last year of high school as my senior project.
The incredibly basic 2D game idea imploded after I realized I couldn't write my own physics engine. Instead, I started using Box2D, and set out trying to construct the epic multiplayer game. It was a catastrophic failure of massive proportions. By the end of the year, all I had to show for my project was a stupid jeep driving across a platform. I passed anyway, and somehow got accepted into the University of Washington, but only after I wrote them an angry appeal letter in response to my initial rejection.
I prepared to move out and start my new life free of my parents. OK, the game hadn't worked out, but that summer I'd get the rest of the basics done, and during my first college quarter I'd get an initial alpha out and pay for my dorm that way! This, of course, was also a complete and utter failure. I ran back home with my tail between my legs after only a single quarter in that hellhole, half my savings gone to housing, after realizing that half my engine was broken and needed to be rebuilt from scratch. I also discovered that I was terrible at network programming.
I figured I needed help focusing on my work, so I tried to build a productivity app, only to learn that GTK+ is almost impossible to work with and that Qt has a 1.5-gigabyte SDK of madness. Then I tried to build an alternative to MSN after its servers kept crashing and dropping messages, but that failed miserably for similar reasons. It was around this time I realized that the one thing I thought I hadn't failed at - my simple audio engine - was actually complete garbage and almost totally useless.
Throughout my second year, I attempted to reconstruct the engine, and intended to port it to C# so a friend could use it. This, of course, also failed. It was during my second year I came up with a pivotal, brilliant idea that I was never able to work on because I lacked the foundation necessary to make it feasible. My work on that foundation was then interrupted by realizing I needed to build a physics editor, which in turn made me realize that CEGUI is terrible. So not only did I fail to reconstruct the engine, I also failed to build the physics editor, and every single other editor, and the editor I built to make editors.
By chance, at the beginning of 2011, the amazing person I still considered an idol suddenly needed a programmer for his game. Seizing the opportunity, I successfully got a chance to build a prototype, put everything else on hold and got to work immediately.
Then my mom had a heart attack and nearly died. I became more determined than ever to make sure 2011 would be the year I finally managed to do something. Anything. So of course I discovered my animation system was broken and eventually had to put the prototype on hold. Then I only got a 3.4 in the computer science class, and ended up having to major in Applied Mathematics instead of computer science because they wouldn't let me in. Thus, I had failed at getting into the major that was the entire reason I had wanted to attend the UW in the first place.
I wanted to get serious, and finally created my company sometime in November 2011, possibly more out of desperation than for any real reason. I completely failed at finishing anything at all for 2011. I decided 2012 would be the year everything changed and went back to my physics editor, determined to make it work.
So of course I failed at that too. Then I failed at finishing my productivity manager. Then I tried to build a puzzle game so drop-dead simple there was no way I couldn't finish it and completely failed anyway. At this point college dumped so much homework on me I was virtually incapacitated for 6 months. I was convinced I had to get the puzzle game to at least be functional by the end of summer 2012, so naturally I failed to do that. I then decided I needed the tile game to be up and running with a demo in a month or two, and continued my amazing streak of utter failure.
2012 is almost over and the world is supposed to end in a few days. Two weeks ago I released my first commercial album. After my entire life being a miserable failure at everything I cared about, I decided my goal for the album was to make a measly $45. Surely, I can meet a goal that is so pathetically low all it does is pay off how much it cost to get it into iTunes and Google Play? I was pushing the album on every single social media outlet I had access to. I made $15.
2013 will be my sixth year of fighting for this dream. My dream has been torn apart and shredded into a ghost of what it once was. Now all I want is to be able to feed myself without living in my parents' house. Screw being famous or rich, I just want to make a living doing something that doesn't make me want to throw myself off a cliff. I guess wanting a job that you don't absolutely despise is stupid, idealistic thinking.
I wish I had something to show. I wish I could say, look at this thing I built that nobody is looking at! But I've failed at everything so hard I don't have a single completed project. The only thing I have to show for the last 5 years is a useless piece of paper in a major I didn't even want and a list of failures so long it's disturbing to look at. So why then, do I continue in this hilariously idealistic dream that is clearly never going to work? Because I am numb to failure at this point. I can't do anything else. I simply trudge onward, relentlessly fighting against this endless storm of not being good enough, hoping that next year, next year will be different... I will fail a thousand times if I have to just to make this happen.
Because dreams are worth fighting for.
December 18, 2012
December 14, 2012
Giant List of FREE SAMPLES
Back when I started making music, I quickly realized that trying to make sample-based music without a significant sample library is really hard. There are thousands of sample libraries for sale, but the problem with starting out is that, by definition, you suck, you're probably in high school, and your allowance is likely going towards other, more important things than your silly musical experiments.
Over the years, I've been frustrated with how difficult it is to find good free samples. It turns out that there's some really great royalty-free stuff out there, if you can find it. This post is a categorized list of every single sample pack and soundfont I have that is royalty-free and of reasonable quality, in an effort to give beginner musicians a much better foundation to build their music on. Some of these are soundfonts compressed with sfArk - just drag the file onto sfArkXTc.exe to generate a self-extracting executable that will create the sf2 file when run. Others are compressed using sfpack.exe. A few use the SFZ format (usually when the soundfont in question would take up too much RAM), which can be read by the free sForzando VSTi.
Instrumental
000_Florestan_Piano
198-StratocasterVS
MIS Stereo Piano [ SoundFont | Directwave ]
MIS Orchestra Samples [Requires Directwave]
SGM-V2.01
Nylon Guitar 1
Sonatina Symphonic Orchestra SF2 (original SFZ)
HQ Orchestra Samples: Percussion 1 · Percussion 2 · Strings 1 · Strings 2 · Strings 3 · Brass/Woodwinds · Choir/SFX · Bank 1 · Bank 2 · Bank 3
070 Bassoon Ethan Nando
Roland Sound Canvas
Tubular Bells
Maestro Concert Piano (SFZ)
Percussion
SAMPLES - 5000 DRUMHITS
1400 Samples Drum Kit
mhak kicks
drums_industrial
drums_ken_ardency
RolandOrchestralRythm
JD Rockset 5
Ellade Drums
The Freq's Glitch Hop Sample Pack
VSTi
While not samples, these are free VSTi plugins, compatible with most modern DAWs.
Bitcrusher
Tube Screamer
Rez v2.0
Reaktor 5 (free version)
Reaktor 5 Factory Selection
Mikro Prism [Requires Reaktor]
dmiHammer
Jug Drum (free version) [Requires Kontakt]
Dryer Drum [Requires Kontakt]
The SimulAnalog Guitar Suite is very nice, but is for non-commercial use only.
What are these samples capable of? You know you can't ask me that without getting a bunch of disgusting self-promoting links for a programmer's pathetic music making attempts, right? Oh well, if you insist:
More songs like these can be found in their respective albums:
Solar Noise - EP
Aurora Theory
These samples are extremely versatile, so don't mistake my own laughable attempts at using them for some imaginary limitations. There are thousands of samples in these packs, so get creative!
November 13, 2012
The Weekend Apelsin Got Lost All The Time
So on Thursday morning, my friend apelsin mentioned a hackathon at his college, San Jose State University. I expressed interest but pointed out that I live 800 miles away. So he bought me a ticket on the last plane flying to San Francisco that night. Ten hours after he had mentioned the hackathon to me, I was riding a train to San Jose (after missing one train and getting off at the wrong stop).
The hackathon had prizes - $3100, $1500, and $600 for 1st, 2nd, and 3rd, respectively. The goal was to build, in 24 hours, an AI for a game. The game was a 7x7 board of empty tiles. To win a round, you had to construct a continuous path from one side to the other: player 1 must construct a path connecting the left and right sides, while player 2 must construct a path from top to bottom. The squares are grouped into randomized sets, which are cycled through over and over, and each time a set is made available, each bot places a bid on it. The highest bidder gets to pick any square from that set and claim it. You had 98 credits to bid with (7*7*2 = 98) and there was no way to get more.
I was able to build an atemporal pathfinding algorithm that constructed a 7-turn path by figuring out in advance which squares would be available on which turn. To avoid difficult edge cases, I simplified it so it would always try to finish in 7 turns and simply bid 14 credits each time. My friend helped with debugging, brainstorming ideas, and setting everything up, and we had to keep correcting each other on how the game actually worked, but the entire time I was also learning how to use a Mac, since my only laptop was a piece of shit and he only had Macs.
We were also trying to use Codelite in the hopes that it would be better than Xcode, but that ended up being a disaster with constant crashes and failed debuggers. As a result, I learned two things from this hackathon: I fucking hate macs, and Codelite is a horrible, unstable IDE you should never use.
I was only able to stabilize our algorithm with 40 minutes left in the competition. I attempted to implement an aggressive analysis algorithm that would detect a potential win condition for the opponent and block it, but it never worked. Had I instead extended the algorithm to simply look for alternative paths longer than 7 turns, we probably would have won, but I didn't realize that until after the competition. Even though we only had a very basic algorithm, our pathfinding was extremely strong, and as a result we got all the way to 4th place. Unfortunately, that wasn't enough for a cash prize, so we weren't able to recoup the cost of the plane ticket.
So then we took a train back up to his house in San Francisco, went out for dinner, and did more music stuff with his crazy studio monitor setup. The next day involved Fisherman's Wharf, candy, being lost, and getting to play around on a hardware synth. Then I went to bed, and today I got flown back to Seattle.
All because my friend looked at a hackathon advertisement on Thursday morning.
October 21, 2012
Today I Was Mistaken For A 17-Year-Old Girl
This is what they had to say about me:
This was in response to an article I posted on reddit about a 17-year-old American girl who put on a hijab and went to a mall for 2 hours. She describes being completely ignored by everyone, save for a 4-year-old girl who asked if she was a terrorist. All because she wore a scarf on her head.
I have read about hundreds of horrifying accounts of sexism and seen thousands of sickening displays of misogynistic hatred (just dig around YouTube for 5 seconds). But, as they say, it's never quite the same until it happens to you.
It was just funny at first - some dumbass thought I wrote the article just because I submitted it? No wonder he was so full of impulsive hatred. Perhaps he was trolling, or thought it was funny to brutally attack a woman for committing the crime of being born. But as I read the message a few more times, it dawned on me that this was simply an errant fool mistaking me for the opposite gender, yet the champions of feminism must get these kinds of messages all the time.
Of course, I am no stranger to controversy, having spent hours defending my bountiful collection of unpopular opinions about programming languages that no one should really care about. Hundreds of people have felt it necessary to inform me how horribly wrong all my opinions are, and how I'm so bad at programming the world would be a better place if I never wrote another line of code. So why did this message pierce my internet-hate-machine defenses - a message that wasn't even directed at me? Why did it make me think about what it would be like to be a woman and have my inbox full of this vitriolic misogyny every day just because I had an opinion?
In every technical argument, every hateful comment directed at me, they were all due to choices I made. If my opinions were, in fact, so terribly wrong, it was not because I was a horrible human being, it was simply because I made the wrong choices. When somebody calls you a bitch and tells you to bend over, they are not saying this because of choices you made, they are saying this because you are female. As if the fact that you lack a Y-chromosome gives them an innate right to belittle you and strip away your humanity. There is a difference between being told that you are a stupid idiot, and being considered subhuman. It is this subtle, yet infinitely important difference that many people seem to miss.
While I was still young and in primary school, I thought the only difference between boys and girls was that one had a penis and one didn't. It seems that society has failed to manage a level of maturity greater than that of a 9-year-old boy. In fact, when I was in kindergarten, we were playing a game, and the teams were girls vs boys, as usual. I noticed, however, that the girls were significantly outnumbered. In a bid that surprised even myself, I declared that I would join the girls team in order to make things fair.
I wish it were that easy, but supporting women's rights isn't necessarily about standing up for women - it's about letting women stand up for themselves. It's about treating them like normal human beings and giving them the opportunity to solve their own problems. It's about respecting them because of who they are, not simply due to their gender.
Perhaps one day, society can rise to a 9-year-old kid's level of sophistication.
October 20, 2012
C# to C++ Tutorial - Part 4: Operator Overload
[ 1 · 2 · 3 · 4 · 5 · 6 · 7 ]
If you are familiar with C#, you should be familiar with the difference between C#'s `struct` and `class` declarations. Namely, a `struct` is a value type and a `class` is a reference type, meaning that if you pass a struct to a function, its default behavior is for the entire struct to be copied into the function's parameter, so any modifications made to it won't affect whatever was passed in. On the flip side, a class is a reference type, so a reference is passed into the function, and any changes made to that reference will be reflected in the object that was originally passed into the function.

```csharp
// Takes an integer, or a basic value type
public static int add(int v)
{
  v += 3;
  return 4 + v;
}

public struct Oppa
{
  public string gangnam;
}

// Takes a struct, or a complex value type
public static Oppa style(Oppa g)
{
  g.gangnam = "notstyle";
  return g;
}

public class Psy
{
  public int style;
}

// Takes a class, or a reference type
public static void change(Psy psy)
{
  psy.style = 5;
}

// Takes an integer, but forces it to be passed by reference instead of by value.
public static int addref(ref int v)
{
  v += 3;
  return 4 + v;
}

int a = 0;
int b = add(a);
// a is still 0
// b is now 7
int c = addref(ref a);
// a is now 3, because it was passed by reference
// c is now 7

Oppa s1;
s1.gangnam = "style";
Oppa s2 = style(s1);
// s1.gangnam is still "style"
// s2.gangnam is now "notstyle"

Psy psy = new Psy();
psy.style = 0;
change(psy);
// psy.style is now 5, because it was passed by reference
```

C++ also lets you pass parameters by reference and by value; however, it is more explicit about what is happening, so there is no default behavior to know about. If you simply declare the type itself, for example `int v`, the argument is passed by value and copied, while declaring it as a reference, `int& v`, passes it by reference.
However, in order to copy something, C++ needs to know how to properly copy your class. This gives rise to the copy constructor. By default, the compiler will automatically generate a copy constructor for your class that simply invokes all the default copy constructors of whatever member variables you have, just like C#. If, however, your class is holding on to a pointer, then this is going to cause a giant mess when two objects are pointing to the same thing and one of them deletes what it's pointing to! By specifying a copy constructor, we can deal with the pointer properly:
This copy constructor can be invoked manually, but it will simply be implicitly called whenever it's needed. Of course, that isn't the only time we need to deal with our rogue pointer that screws things up. What happens when we set our class equal to another class? Remember, a reference cannot be changed after it is created. Observe the following behavior:
So somehow, we have to overload the assignment operator! This brings us to Operator Overloading. C# operator overloading works by defining global operator overloads, ones that take a left and a right argument, and are static functions. By default, C++ operator overloads only take the right argument; the left side of the equation is implied to be the class itself. Consequently, C++ operators are not static. C++ does have global operators, but they are defined outside the class, and the assignment operator isn't allowed as a global operator; you have to define it inside the class. The overloadable operators are declared as shown below:
We can see that the assignment operator mimics the arguments of our copy constructor. For the most part, it does the exact same thing; the only difference is that existing values must be destroyed, an operation that should mostly mimic the destructor. We extend our previous class to have an assignment operator accordingly:
These operations take an instance of the class and copy its values to our instance; consequently, these are known as copy semantics. If this were 1998, we'd stop here, because for a long time C++ only had copy semantics: either you passed around references to objects, or you copied them. You could also pass around pointers to objects, but remember that pointers are value types just like integers and floats, so you are really just copying them around too. What you could not do was bind a non-const reference to a temporary value, so there was no way to tell a short-lived temporary apart from a long-lived object. Provided you are using a C++0x-compliant compiler, this is no longer true: the new standard released in 2011 introduces rvalue references, and with them move semantics.
Move semantics are designed to solve the following problem. If we have a series of dynamic string objects being concatenated, with normal copy constructors we run into a serious problem:
This is terribly inefficient; it would be much more efficient if we could utilize the temporary objects that are going to be destroyed anyway, instead of reallocating a bunch of memory over and over again only to delete it immediately afterwards. This is where move semantics come into play. First, we need to define a "temporary" object as one whose scope is entirely contained on the right side of an expression. That is to say, given a single assignment statement such as `Str r = a + b + c;`, the intermediate results of the concatenations live only inside the right side of the statement: they are temporaries, destroyed as soon as the statement completes.
The idea behind a move constructor is that, instead of copying the values into our object, we move them into our object, setting the source to some empty default state so its destructor has nothing left to clean up. A move constructor is distinguished from a copy constructor by taking an rvalue reference, written `Str&&`, the new kind of reference that binds to temporaries.
But wait: just like the copy constructor has a matching copy assignment operator, the move constructor has a matching move assignment operator. The move assignment operator behaves exactly like the move constructor, but it must destroy the existing value beforehand. It is declared like this:
Move semantics can be used for some interesting things, like unique pointers, that only have move semantics - by disabling the copy constructor, you can create an object that is impossible to copy, and can therefore only be moved, which guarantees that there will only be one copy of its contents in existence.
There is an important detail when you are using inheritance or objects with move semantics:
Here we have to use `std::move` when passing the argument along to the base class, because a named rvalue reference is itself an lvalue; without `std::move`, the copy constructor would be silently invoked instead of the move constructor.
There are some other weird things you can do with move semantics. The most interesting part is the strange behavior of `&&` when it is applied to a type that is already a reference, in which case the references collapse: `&` combined with `&` yields `&`, `&` combined with `&&` yields `&`, `&&` combined with `&` yields `&`, and only `&&` combined with `&&` yields `&&`.
By taking advantage of the second and fourth lines, we can perform perfect forwarding. Perfect forwarding allows us to pass an argument as either a normal reference (
Notice that this allows us to assign our data object using either the copy assignment, or the move assignment operator, by using
Notice the use of
If you are familiar with C#, you should be familiar with the difference between C#'s struct and class declarations. Namely, a struct is a value type and a class is a reference type, meaning that if you pass a struct into a function, the default behavior is for the entire struct to be copied into the function's parameter, so any modifications made to it won't affect whatever was passed in. On the flip side, a class is a reference type, so a reference is passed into the function, and any changes made through that reference will be reflected in the object that was originally passed in.

// Takes an integer, or a basic value type
public static int add(int v)
{
  v+=3;
  return 4+v;
}

public struct Oppa
{
  public string gangnam;
}

// Takes a struct, or a complex value type
public static Oppa style(Oppa g)
{
  g.gangnam="notstyle";
  return g;
}

public class Psy
{
  public int style;
}

// Takes a class, or a reference type
public static void change(Psy psy)
{
  psy.style=5;
}

// Takes an integer, but forces it to be passed by reference instead of by value.
public static int addref(ref int v)
{
  v+=3;
  return 4+v;
}

int a = 0;
int b = add(a);
// a is still 0
// b is now 7
int c = addref(ref a);
// a is now 3, because it was passed by reference
// c is now 7

Oppa s1;
s1.gangnam="style";
Oppa s2 = style(s1);
// s1.gangnam is still "style"
// s2.gangnam is now "notstyle"

Psy psy = new Psy();
psy.style=0;
change(psy);
// psy.style is now 5, because it was passed by reference
C++ also lets you pass parameters by reference and by value, but it is more explicit about what is happening, so there is no default behavior to remember. If you simply declare the type itself, for example (myclass C, int B), then it will be passed by value and copied. If, however, you use the reference symbol that we've used before in variable declarations, it will be passed by reference. This happens no matter what. If a reference is passed into a function that takes a value, a copy will still be made.

// Integer passed by value
int add(int v)
{
  v+=3;
  return 4+v;
}

class Psy
{
public:
  int style;
};

// Class passed by value
Psy change(Psy psy)
{
  psy.style=5;
  return psy;
}

// Integer passed by reference
int addref(int& v)
{
  v+=3;
  return 4+v;
}

// Class passed by reference
Psy changeref(Psy& psy)
{
  psy.style=5;
  return psy;
}

int horse = 2;
int korea = add(horse);
// horse is still 2
// korea is now 9

int horse2 = 2;
int korea2 = addref(horse2);
// horse2 is now 5
// korea2 is now 9

Psy psy;
psy.style = 0;
Psy ysp = change(psy);
// psy.style is still 0
// ysp.style is now 5

Psy psy2;
psy2.style = 0;
Psy ysp2 = changeref(psy2);
// psy2.style is now 5
// ysp2.style is also 5
However, in order to copy something, C++ needs to know how to properly copy your class. This gives rise to the copy constructor. By default, the compiler automatically generates a copy constructor for your class that simply invokes the copy constructors of all your member variables, just like C#. If, however, your class is holding on to a pointer, this causes a giant mess when two instances end up pointing to the same thing and one of them deletes what it's pointing to! By specifying a copy constructor, we can deal with the pointer properly:
class myString
{
public:
  // The copy constructor, which copies the string over instead of copying the pointer
  myString(const myString& copy)
  {
    size_t len = strlen(copy._str)+1; //+1 for null terminator
    _str=new char[len];
    memcpy(_str,copy._str,sizeof(char)*len);
  }
  // Normal constructor
  myString(const char* str)
  {
    size_t len = strlen(str)+1; //+1 for null terminator
    _str=new char[len];
    memcpy(_str,str,sizeof(char)*len);
  }
  // Destructor that deallocates our string
  ~myString()
  {
    delete [] _str;
  }

private:
  char* _str;
};
This copy constructor can be invoked manually, but it will simply be implicitly called whenever it's needed. Of course, that isn't the only time we need to deal with our rogue pointer that screws things up. What happens when we set our class equal to another class? Remember, a reference cannot be changed after it is created. Observe the following behavior:
int a = 3;
int b = 2;
int& ra = a;
int* pa = &a;
b = a; // b is now 3
a = 0; // b is still 3, but a is now 0
b = ra; // b is now 0
a = 5; // b is still 0, but now a is 5
b = *pa; // b is now 5
b = 8; // b is now 8, but a is still 5
ra = b; // a is now 8! This assigns b's value to ra, it does NOT change the reference!
ra = 9; // a is now 9, and b is still 8! ra STILL refers to a, and NOTHING can change that.
pa = &b; // Now pa points to b
a = *pa; // a is now 8, because pointers CAN be changed.
*pa = 7; // Now b is 7, but a is still 8
int*& rpa = pa; // Now we have a reference to a pointer (C++11)
//rpa = 5; // COMPILER ERROR, rpa is a reference to a POINTER
int** ppa = &pa;
//rpa = ppa; // COMPILER ERROR, rpa is a REFERENCE to a pointer, not a pointer to a pointer!
rpa = &a; // Now pa points to a again. This does NOT change the reference!
b = *pa; // Now b is 8, the same as a.
So somehow, we have to overload the assignment operator! This brings us to operator overloading. C# operator overloading works by defining global operator overloads: static functions that take a left and a right argument. By default, C++ operator overloads take only the right argument; the left side of the expression is implied to be the class itself. Consequently, C++ member operators are not static. C++ does have global operators, but they are defined outside the class, and the assignment operator isn't allowed as a global operator; you have to define it inside the class. All the overloadable operators are shown below with appropriate declarations:
class someClass
{
  someClass operator =(anything b); // me=other
  someClass operator +(anything b); // me+other
  someClass operator -(anything b); // me-other
  someClass operator +(); // +me
  someClass operator -(); // -me (negation)
  someClass operator *(anything b); // me*other
  someClass operator /(anything b); // me/other
  someClass operator %(anything b); // me%other
  someClass& operator ++(); // ++me
  someClass& operator ++(int); // me++
  someClass& operator --(); // --me
  someClass& operator --(int); // me--

  // All operators can TECHNICALLY return any value whatsoever, but for many of them only certain values make sense.
  bool operator ==(anything b);
  bool operator !=(anything b);
  bool operator >(anything b);
  bool operator <(anything b);
  bool operator >=(anything b);
  bool operator <=(anything b);
  bool operator !(); // !me

  // These operators do not usually return someClass, but rather a type specific to what the class does.
  anything operator &&(anything b);
  anything operator ||(anything b);
  anything operator ~();
  anything operator &(anything b);
  anything operator |(anything b);
  anything operator ^(anything b);
  anything operator <<(anything b);
  anything operator >>(anything b);

  someClass& operator +=(anything b); // Should always return *this;
  someClass& operator -=(anything b);
  someClass& operator *=(anything b);
  someClass& operator /=(anything b);
  someClass& operator %=(anything b);
  someClass& operator &=(anything b);
  someClass& operator |=(anything b);
  someClass& operator ^=(anything b);
  someClass& operator <<=(anything b);
  someClass& operator >>=(anything b);

  anything operator [](anything b); // This will almost always return a reference to some internal array type, like myElement&
  anything operator *();
  anything operator &();
  anything* operator ->(); // This has to return a pointer or some other type that has the -> operator defined.
  anything operator ->*(anything a);
  anything operator ()(anything a1, U a2, ...);
  anything operator ,(anything b);
  operator otherThing(); // Allows this class to have an implicit conversion to type otherThing
  void* operator new(size_t x); // These are called when you write new someClass()
  void* operator new[](size_t x); // new someClass[num]
  void operator delete(void* x); // delete pointer_to_someClass
  void operator delete[](void* x); // delete [] pointer_to_someClass
};

// These are global operators that behave more like C# operators, but must be defined outside of classes. A few operators do not have global overloads, which is why they are missing from this list. Again, operators can technically take or return any value, but normally you only override these so you can handle some other type being on the left side.
someClass operator +(anything a, someClass b);
someClass operator -(anything a, someClass b);
someClass operator +(someClass a);
someClass operator -(someClass a);
someClass operator *(anything a, someClass b);
someClass operator /(anything a, someClass b);
someClass operator %(anything a, someClass b);
someClass operator ++(someClass a);
someClass operator ++(someClass a, int); // Note the unnamed dummy parameter int - this differentiates between prefix and postfix increment operators.
someClass operator --(someClass a);
someClass operator --(someClass a, int); // Note the unnamed dummy parameter int - this differentiates between prefix and postfix decrement operators.
bool operator ==(anything a, someClass b);
bool operator !=(anything a, someClass b);
bool operator >(anything a, someClass b);
bool operator <(anything a, someClass b);
bool operator >=(anything a, someClass b);
bool operator <=(anything a, someClass b);
bool operator !(someClass a);
bool operator &&(anything a, someClass b);
bool operator ||(anything a, someClass b);
someClass operator ~(someClass a);
someClass operator &(anything a, someClass b);
someClass operator |(anything a, someClass b);
someClass operator ^(anything a, someClass b);
someClass operator <<(anything a, someClass b);
someClass operator >>(anything a, someClass b);
someClass operator +=(anything a, someClass b);
someClass operator -=(anything a, someClass b);
someClass operator *=(anything a, someClass b);
someClass operator /=(anything a, someClass b);
someClass operator %=(anything a, someClass b);
someClass operator &=(anything a, someClass b);
someClass operator |=(anything a, someClass b);
someClass operator ^=(anything a, someClass b);
someClass operator <<=(anything a, someClass b);
someClass operator >>=(anything a, someClass b);
someClass operator *(someClass a);
someClass operator &(someClass a);
someClass operator ->*(anything a, someClass b);
someClass operator ,(anything a, someClass b);
void* operator new(size_t x);
void* operator new[](size_t x);
void operator delete(void* x);
void operator delete[](void* x);
We can see that the assignment operator mimics the arguments of our copy constructor. For the most part, it does the exact same thing; the only difference is that existing values must be destroyed, an operation that should mostly mimic the destructor. We extend our previous class to have an assignment operator accordingly:
class myString
{
public:
  // The copy constructor, which copies the string over instead of copying the pointer
  myString(const myString& copy)
  {
    size_t len = strlen(copy._str)+1; //+1 for null terminator
    _str=new char[len];
    memcpy(_str,copy._str,sizeof(char)*len);
  }
  // Normal constructor
  myString(const char* str)
  {
    size_t len = strlen(str)+1; //+1 for null terminator
    _str=new char[len];
    memcpy(_str,str,sizeof(char)*len);
  }
  // Destructor that deallocates our string
  ~myString()
  {
    delete [] _str;
  }
  // Assignment operator, does the same thing the copy constructor does, but also mimics the destructor by deleting _str first. NOTE: It is considered bad practice to call the destructor directly. Use a Clear() method or something equivalent instead.
  myString& operator=(const myString& right)
  {
    delete [] _str;
    size_t len = strlen(right._str)+1; //+1 for null terminator
    _str=new char[len];
    memcpy(_str,right._str,sizeof(char)*len);
    return *this; // Assignment should always return *this so assignments can be chained
  }

private:
  char* _str;
};
These operations take an instance of the class and copy its values into our instance. Consequently, these are known as copy semantics. If this were 1998, we'd stop here, because for a long time, C++ only had copy semantics: either you passed around references to objects, or you copied them. You could also pass around pointers to objects, but remember that pointers are value types just like integers and floats, so you are really just copying them around too. In fact, until recently, you were not allowed to have references to pointers; pointers were the one data type that had to be passed by value. Provided you are using a C++0x-compliant compiler, this is no longer true, as you may remember from our earlier examples. The new standard released in 2011 allows references to pointers, and introduces move semantics.
Move semantics are designed to solve the following problem. If we have a series of dynamic string objects being concatenated, with normal copy constructors we run into a serious problem:
std::string result = std::string("Oppa") + std::string(" Gangnam") + std::string(" Style") + std::string(" by") + std::string(" Psy");
// operator+ is left-associative, so this is evaluated by first creating a new string object with its own memory allocation, then deallocating both "Oppa" and " Gangnam" after copying their contents into the new one:
//std::string result = std::string("Oppa Gangnam") + std::string(" Style") + std::string(" by") + std::string(" Psy");
// Then another new object is made and "Oppa Gangnam" and " Style" are deallocated:
//std::string result = std::string("Oppa Gangnam Style") + std::string(" by") + std::string(" Psy");
// And so on and so forth:
//std::string result = std::string("Oppa Gangnam Style by") + std::string(" Psy");
//std::string result = std::string("Oppa Gangnam Style by Psy");
// So just to add 5 strings together, we've had to allocate room for 4 intermediate strings in the middle of it, all of which are thrown away almost immediately!
This is terribly inefficient; it would be much more efficient if we could reuse the temporary objects that are going to be destroyed anyway, instead of reallocating a bunch of memory over and over again only to delete it immediately afterwards. This is where move semantics come into play. First, we need to define a "temporary" object as one whose scope is entirely contained on the right side of an expression. That is to say, given a single assignment statement a=b, if an object is both created and destroyed inside b, then it is considered temporary. Because of this, these temporary values are also called rvalues, short for "right values". C++0x introduces the syntax variable&& to designate an rvalue. This is how you declare a move constructor:

class myString
{
public:
  // The copy constructor, which copies the string over instead of copying the pointer
  myString(const myString& copy)
  {
    size_t len = strlen(copy._str)+1; //+1 for null terminator
    _str=new char[len];
    memcpy(_str,copy._str,sizeof(char)*len);
  }
  // Move constructor, which steals the pointer instead of copying the string
  myString(myString&& mov)
  {
    _str = mov._str;
    mov._str=NULL;
  }
  // Normal constructor
  myString(const char* str)
  {
    size_t len = strlen(str)+1; //+1 for null terminator
    _str=new char[len];
    memcpy(_str,str,sizeof(char)*len);
  }
  // Destructor that deallocates our string
  ~myString()
  {
    if(_str!=NULL) // Make sure we only delete _str if it isn't NULL!
      delete [] _str;
  }
  // Assignment operator, does the same thing the copy constructor does, but also mimics the destructor by deleting _str first. NOTE: It is considered bad practice to call the destructor directly. Use a Clear() method or something equivalent instead.
  myString& operator=(const myString& right)
  {
    delete [] _str;
    size_t len = strlen(right._str)+1; //+1 for null terminator
    _str=new char[len];
    memcpy(_str,right._str,sizeof(char)*len);
    return *this;
  }

private:
  char* _str;
};

NOTE: Observe that our destructor functionality was changed! Now that _str can be NULL, we check for that before deleting it. (Strictly speaking, delete on a NULL pointer is a safe no-op, but the explicit check documents the moved-from state.)
The idea behind a move constructor is that, instead of copying the values into our object, we move them into our object, setting the source to some NULL value. Notice that this can only work for pointers, or objects containing pointers. Integers, floats, and other similar types can't really be "moved", so their values are simply copied over instead. Consequently, move semantics are only beneficial for types like strings that involve dynamic memory allocation. However, because we must set the source pointers to NULL, we can't take const myString&&, because then we wouldn't be able to modify the source pointers! This is why a move constructor is declared without a const modifier, which makes sense, since we intend to modify the object.

But wait: just like the copy constructor has a copy assignment operator, the move constructor has an equivalent move assignment operator. Just like copy assignment, the move assignment operator behaves exactly like the move constructor, but must destroy the existing contents beforehand. The move assignment operator is declared like this:
myString& operator=(myString&& right)
{
  delete [] _str;
  _str=right._str;
  right._str=NULL;
  return *this;
}
Move semantics can be used for some interesting things, like unique pointers that only have move semantics. By disabling the copy constructor, you can create an object that is impossible to copy and can therefore only be moved, which guarantees that there will only be one copy of its contents in existence. std::unique_ptr is an implementation of this provided in C++0x. Note that if a data structure requires copy semantics, std::unique_ptr will throw a compiler error, instead of simply mysteriously failing like the deprecated std::auto_ptr.

There is an important detail when you are using inheritance or objects with move semantics:
class Substring : myString
{
  Substring(Substring&& mov) : myString(std::move(mov))
  {
    _sub = std::move(mov._sub);
  }

  Substring& operator=(Substring&& right)
  {
    myString::operator=(std::move(right));
    _sub = std::move(right._sub);
    return *this;
  }

  myString _sub;
};
Here we are using std::move(), which takes a variable (either an rvalue or a normal reference) and returns an rvalue for that variable. This is because rvalues stop being rvalues the instant they are passed into a different function, which makes sense, since they are no longer on the right-hand side anymore. Consequently, if we were to pass mov above into our base class directly, it would trigger the copy constructor, because mov would be treated as const Substring&, instead of Substring&&. Using std::move lets us pass it in as Substring&& and properly trigger the move semantics. As you can see in the example, you must use std::move when moving any complex object, using base class constructors, or base class assignment operators. Note that std::move allows you to force an object to be moved to another object regardless of whether or not it's actually an rvalue. This is particularly useful for moving around std::unique_ptr objects.

There are some other weird things you can do with move semantics. The most interesting is the strange behavior of && when it is appended to existing references:

A& & becomes A&
A& && becomes A&
A&& & becomes A&
A&& && becomes A&&
By taking advantage of the second and fourth lines, we can perform perfect forwarding. Perfect forwarding allows us to pass an argument as either a normal reference (A&) or an rvalue (A&&) and then forward it into another function, preserving its status as an rvalue or a normal reference, including whether or not it's const A& or const A&&. Perfect forwarding can be implemented like so:

template<typename U>
void Set(U && other)
{
  *this = std::forward<U>(other); // forwards into either the copy or the move assignment operator
}
Notice that this allows us to assign our data object using either the copy assignment or the move assignment operator, by using std::forward<U>(), which transforms our reference into an rvalue if it was an rvalue, or a normal reference if it was a normal reference, much like std::move() transforms everything into an rvalue. However, this requires a template, which may not always be correctly inferred. A more robust implementation uses two separate functions forwarding their parameters into a helper function:

class myString
{
public:
  // The copy constructor, which copies the string over instead of copying the pointer
  myString(const myString& copy)
  {
    size_t len = strlen(copy._str)+1; //+1 for null terminator
    _str=new char[len];
    memcpy(_str,copy._str,sizeof(char)*len);
  }
  // Move constructor
  myString(myString&& mov)
  {
    _str = mov._str;
    mov._str=NULL;
  }
  // Normal constructor
  myString(const char* str)
  {
    size_t len = strlen(str)+1; //+1 for null terminator
    _str=new char[len];
    memcpy(_str,str,sizeof(char)*len);
  }
  // Destructor that deallocates our string
  ~myString()
  {
    if(_str!=NULL) // Make sure we only delete _str if it isn't NULL!
      delete [] _str;
  }

  void Set(myString&& str) { _set<myString&&>(std::move(str)); }
  void Set(const myString& str) { _set<const myString&>(str); }

  // Copy assignment operator. NOTE: It is considered bad practice to call the destructor directly. Use a Clear() method or something equivalent instead.
  myString& operator=(const myString& right)
  {
    delete [] _str;
    size_t len = strlen(right._str)+1; //+1 for null terminator
    _str=new char[len];
    memcpy(_str,right._str,sizeof(char)*len);
    return *this;
  }
  // Move assignment operator, so the rvalue path actually moves instead of copying
  myString& operator=(myString&& right)
  {
    delete [] _str;
    _str=right._str;
    right._str=NULL;
    return *this;
  }

private:
  template<typename U>
  void _set(U && other)
  {
    *this=std::forward<U>(other); // invokes either the copy or the move assignment operator
  }

  char* _str;
};
Notice the use of std::move() to transfer the rvalue correctly, followed by std::forward<U>() to forward the parameter. By using this, we avoid redundant code, but can still build move-aware data structures that efficiently assign values with relative ease. Now, it's on to Part 5: Delegated Llamas! Or, well, delegates, function pointers, and lambdas. Possibly involving llamas. Maybe.
October 15, 2012
Lockless Lattice-Based Computing
The Flow Programming Language was introduced as a form of lattice-based computing[1]. This method of thinking about code structure was motivated by the need to make implicit parallelization of code possible at compile-time. Despite this, it has fascinated me for a number of reasons that extend well beyond simple multithreading applications. Most intriguing is its complete elimination of memory management[2]. A lattice-based programming language doesn't have manual memory allocation or garbage collection, because it knows when and where memory is needed throughout the entire program. This is particularly exciting to me, because for the longest time I've had to use C or C++ simply because they are the only high-performance languages in existence that don't have garbage collection, which can be catastrophic in real-time scenarios, like games.
Naturally, this new methodology is not without its flaws. In particular, it currently has no way of addressing the issue of what to do when one variable depends on the values of multiple other variables, which can give rise to race conditions.
In the above diagram, both e and f depend on multiple previous results. This is a potential race condition, and the only currently proposed solution is using traditional locks, which would incur contention issues[3]. This, however, is not actually necessary. Instead of using an ugly lock, we can pair each variable up with a parameter counter. This counter keeps track of the number of the variable's dependencies that have yet to be calculated. When a variable is resolved, it atomically decrements the parameter counter of every variable that is immediately dependent on it. If this atomic decrement results in a value of exactly 0, the thread can take one of 3 options:
1. Queue a task to the thread manager to execute the variable's code path at some point in the future.
2. Add the variable to its own execution queue and evaluate it later.
3. Evaluate the variable and its execution path immediately (only valid if there are no other possible paths of execution)
If a thread reaches a situation where every variable dependent on the last value it evaluated still has a nonzero parameter count, then the thread simply "dies" and is put back into the thread scheduler to take on another task. This solution takes advantage of the fact that any implicit parallelization solution will require a robust method of rapidly creating and destroying threads (most likely through some kind of task system). Consequently, by concentrating all potential race conditions and contention inside the thread scheduler itself, we can implement the scheduler with a lockless algorithm and eliminate every race condition in the rest of the program.
It's important to note that while the thread scheduler may be implemented with a lock-free (or preferably wait-free) algorithm, the program itself does not magically become lock-free: execution can always bottleneck on a single value that everything else depends on, stalling everything behind it. Despite this, using parameter counters provides an elegant way of resolving race conditions in lattice-based programming languages.
This same method can be used to construct various intriguing multithreading scenarios, such as accessing a GPU. Calling a GPU DrawPrimitive method cannot be done from multiple threads simultaneously. Once again, we could solve this in a lattice-based programming language by simply introducing locks, but I hate locks. Surely there is a more elegant approach? One method would be to construct an algorithm that builds a queue of processed primitives, which is then pumped into the GPU through a single thread. This hints at the idea that so long as a non-thread-safe function is isolated to a single thread of execution inside a program, we can guarantee at compile time that it will never be called concurrently. In other words, it can only appear once per level in the DAG.

Of course, what if we wanted to render everything in a specific order? We introduce a secondary dependency in the draw call, such that it depends both on the primitive processing results and on a dummy variable that is resolved only when the drawing call directly before it is completed. If the primitive processing finishes before the previous drawing call is completed, the thread simply dies. When the previous drawing call is finished, that thread decrements the variable's parameter count to 0 and, because there are no other avenues of execution, simply carries on finishing the drawing call for the now-dead thread. If, however, the previous drawing call is completed before the next primitive can be processed, the previous thread dies instead, and when the primitive processing is completed, that thread carries on with the drawing calls.
What is interesting is that this very closely mirrors my current implementation in C++ for achieving the exact same task in a real-world scenario, except it is potentially even more efficient due to its ability to entirely avoid dynamic memory allocation. This is very exciting news for high performance programmers, and supports the idea of Lattice-Based Programming becoming a valuable tool in the near future.
[1] I'd much rather call it "Lattice-Based Programming", since it's a way of thinking about code and not just computing.
[2] I would link to this if the original post had anchors to link to.
[3] Also, I happen to have a deep-seated, irrational hatred of locks.
September 27, 2012
7 Problems Raytracing Doesn't Solve
I see a lot of people get excited about extreme concurrency in modern hardware bringing us closer to the magical holy grail of raytracing. It seems that everyone thinks that once we have raytracing, we can fully simulate entire digital worlds, everything will be photorealistic, and graphics will become a "solved problem". This simply isn't true, and in fact highlights several fundamental misconceptions about the problems faced by modern games and other interactive media.
For those unfamiliar with the term, raytracing is the process of rendering a 3D scene by tracing the path of a beam of light after it is emitted from a light source, calculating its properties as it bounces off various objects in the world until it finally hits the virtual camera. At least, you hope it hits the camera. You see, to be perfectly accurate, you have to cast a bajillion rays of light out from the light sources and then see which ones end up hitting the camera at some point. This is obviously a problem, because most of the rays don't actually hit the camera, and are simply wasted. Because this brute force method is so incredibly inefficient, many complex algorithms (such as photon-mapping and Metropolis light transport) have been developed to yield approximations that are thousands of times more efficient. These techniques are almost always focused on attempting to find paths from the light source to the camera, so rays can be cast in the reverse direction. Some early approximations actually cast rays out from the camera until they hit an object, then calculated the lighting information from the distance and angle, disregarding other objects in the scene. While highly efficient, this method produced extremely inaccurate results.
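To make the two directions concrete, here is a minimal Python sketch of the camera-outward approach described above, in the spirit of those crude early approximations: cast a ray from the camera, find the nearest hit, and shade by distance alone, disregarding the rest of the scene. The single-sphere scene and the shading formula are invented for illustration.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the smallest positive t where origin + t*direction hits the
    sphere, or None. Assumes direction is normalized (so the quadratic's
    leading coefficient is 1)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def trace(origin, direction, spheres):
    """Backward tracing: follow a single camera ray into the scene and shade
    the nearest hit by distance, ignoring all other objects and lights."""
    hits = [(t, s) for s in spheres
            if (t := intersect_sphere(origin, direction, s[0], s[1])) is not None]
    if not hits:
        return 0.0  # ray escaped to the background
    t, _ = min(hits, key=lambda h: h[0])
    return 1.0 / (1.0 + t)  # crude distance-based falloff

camera = (0.0, 0.0, 0.0)
scene = [((0.0, 0.0, 5.0), 1.0)]  # one unit sphere, 5 units down the z axis
brightness = trace(camera, (0.0, 0.0, 1.0), scene)
```

The asymmetry in the post is visible even here: one camera ray either hits something or it doesn't, whereas tracing forward from a light source would mean firing rays in every direction and hoping some reach the camera.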
It is with a certain irony that raytracing is touted as a precise, super-accurate rendering method when all raytracing is actually done via approximations in the first place. Pixar uses photon-mapping for its movies. Most raytracers operate on stochastic sampling approximations. We can already do raytracing in realtime if we approximate aggressively enough - it just looks boring and is extremely limited. Graphics development doesn't stop the moment someone achieves realtime raytracing, because there will always be room for a better approximation.
1. Photorealism
The meaning of photorealism is difficult to pin down, in part because the term is inherently subjective. If you define photorealism as being able to render a virtual scene such that it precisely matches a photo, then it is almost impossible to achieve in any sort of natural environment, where the slightest wind can push a tree branch out of alignment.

This quickly gives rise to defining photorealism as rendering a virtual scene such that it is indistinguishable from a photograph of a similar scene, even if the two aren't exactly the same. This, however, raises the issue of just how indistinguishable it needs to be. It seems like a bizarre question, but there are different degrees of "indistinguishable" because people's observational capacities differ. Many people will never notice a slightly misaligned shadow or a reflection that's a tad too bright. For others, these errors stand out like sore thumbs and completely destroy the suspension of disbelief.
We have yet another problem in that the entire concept of "photorealism" has nothing to do with how humans see the world in the first place. Photos are inherently linear, while humans experience a much more dynamic, logarithmic lighting scale. This gives rise to HDR photography, which actually has almost nothing to do with the HDR implemented in games: games simply change the brightness of the entire scene, instead of combining multiple exposures to brighten some areas and darken others within the same photo. If all photos are not created equal, then exactly which photo are we talking about when we say "photorealistic"?
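The difference can be sketched in a few lines of Python. The per-pixel weighting below (favoring values near mid-grey) and the sample pixel values are invented for illustration; real HDR photography uses calibrated camera response curves, but the structural point stands: fusion is per-pixel, while game-style "HDR" is a single global adjustment.

```python
def fuse_exposures(exposures):
    """Per-pixel exposure fusion: weight each exposure by how close the pixel
    is to mid-grey (0.5), so well-exposed regions dominate the result."""
    fused = []
    for pixels in zip(*exposures):
        weights = [1.0 - abs(p - 0.5) * 2.0 + 1e-6 for p in pixels]
        total = sum(weights)
        fused.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return fused

# Two hypothetical exposures of the same 3-pixel row.
dark   = [0.05, 0.40, 0.02]  # underexposed, but the midtone is usable
bright = [0.50, 0.95, 0.60]  # the same pixels, overexposed
row = fuse_exposures([dark, bright])

# Game-style "HDR" by contrast just rescales the whole frame at once:
game_hdr = [p * 0.5 for p in bright]
```

In the fused row, each pixel is pulled toward whichever exposure captured it best; in the rescaled row, every pixel is darkened by the same factor regardless of content.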
2. Complexity
Raytracing is often cited as allowing an order of magnitude more detail in models by being able to efficiently process many more polygons. This is only sort of true in that raytracing is not subject to the same computational constraints that rasterization is. Rasterization must render every single triangle in the scene, whereas raytracing is only interested in whether or not a ray hits a triangle. Unfortunately, it still has to navigate through the scene representation. Even if a raytracer could handle a scene with a billion polygons efficiently, this raises completely unrelated problems involving RAM access times and cache pollution that suddenly become actual performance bottlenecks instead of micro-optimizations.

In addition, raytracing approximation algorithms almost always take advantage of rays that degrade quickly, such that they can only bounce 10-15 times before becoming irrelevant. This is fine and dandy for walking around in a city or a forest, but what about a kitchen? Even though raytracing is much better at handling reflections accurately, highly reflective materials cripple the raytracer, because now rays are bouncing hundreds of times off a myriad of surfaces instead of just 10. If not handled properly, it can absolutely devastate performance, which is catastrophic for game engines that must maintain constant render times.
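"Navigating through the scene representation" usually means walking a bounding volume hierarchy, where whole subtrees of triangles are rejected with a cheap slab test against their bounding box. A minimal Python sketch of that test, with an invented ray and boxes:

```python
def ray_hits_box(origin, inv_dir, lo, hi):
    """Slab test: intersect the ray with the three pairs of axis-aligned
    planes bounding the box. The ray hits the box only if the three
    per-axis entry/exit intervals overlap. inv_dir holds 1/direction
    per component, precomputed once per ray."""
    tmin, tmax = 0.0, float("inf")
    for o, inv, a, b in zip(origin, inv_dir, lo, hi):
        t1, t2 = (a - o) * inv, (b - o) * inv
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

# A ray from the origin along the (1, 1, 1) diagonal (inv_dir = (1, 1, 1)).
hit  = ray_hits_box((0.0,) * 3, (1.0,) * 3, (2.0, 2.0, 2.0), (3.0, 3.0, 3.0))
miss = ray_hits_box((0.0,) * 3, (1.0,) * 3, (2.0, -1.0, 2.0), (3.0, 0.0, 3.0))
```

A few multiplies and comparisons let the traversal skip millions of triangles at a time - but every skipped node is still a pointer chase through memory, which is exactly where the RAM and cache problems mentioned above come from.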
3. Scale
How do you raytrace stars? Do you simply wrap a sphere around the sky and give it a "star" material? Do you make them all point sources infinitely far away? How does this work in a space game, where half the stars you see can actually be visited, and the other half are entire galaxies? How do you accurately simulate an entire solar system down to the surface of a planet, as the Kerbal Space Program developers had to? Trying to figure out how to represent that kind of information in a meaningful form with only 64 bits of precision, if you are lucky, is a problem completely separate from raytracing, yet of increasingly relevant concern as games continue to expand their horizons more and more. How do we simulate an entire galaxy? How can we maintain meaningful precision when faced with astronomical scales, and how does this factor in to our rendering pipeline? These are problems that arise in any rendering pipeline, regardless of what techniques it uses, due to fundamental limitations in our representations of numbers.

4. Materials
Do you know what methane clouds look like? What about writing an aerogel shader? Raytracing, by itself, doesn't simply figure out how a given material works, you have to tell it how each material behaves, and its accuracy is wholly dependent on how accurate your description of the material is. This isn't easy, either, it requires advanced mathematical models and heaps of data collection. In many places we're actually still trying to figure out how to build physically correct material equations in the first place. Did you know that Dreamworks had to rewrite part of their cloud shader1 for How To Train Your Dragon? It turns out that getting clouds to look good when your character is flying directly beneath them with a hand raised is really hard.

This is just for common lighting phenomena! How are you going to write shaders for things like pools of magic water and birefringent calcite crystals? How about trying to accurately simulate circular polarizers when most raytracers don't even know what polarization is? Does being photorealistic require you to simulate the Tyndall Effect for caustics in crystals and particulate matter? There are so many tiny little details all around us that affect everything from the color of our iris to the creation of rainbows. Just how much does our raytracer need to simulate in order to be photorealistic?
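At its simplest, "telling the raytracer how a material behaves" means supplying a function from light direction to reflected brightness. A Lambertian diffuse term, sketched below in Python with an arbitrary albedo, is about the crudest such description possible; methane clouds and aerogel demand vastly more elaborate models on exactly this interface.

```python
import math

def lambert(normal, to_light, albedo):
    """Diffuse reflectance: brightness falls off with the cosine of the
    angle between the surface normal and the direction to the light, and
    goes to zero when the light is behind the surface."""
    ndotl = sum(n * l for n, l in zip(normal, to_light))
    return max(0.0, ndotl) * albedo

up = (0.0, 1.0, 0.0)  # a surface facing straight up
overhead = lambert(up, (0.0, 1.0, 0.0), 0.8)                         # light directly above
grazing  = lambert(up, (math.sqrt(0.5), math.sqrt(0.5), 0.0), 0.8)   # light at 45 degrees
behind   = lambert(up, (0.0, -1.0, 0.0), 0.8)                        # light below the surface
```

Everything the raytracer "knows" about a material is encoded in functions like this one, which is why the accuracy of the render can never exceed the accuracy of the material description.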
5. Physics
What if we ignored the first four problems and simply assumed we had managed to make a perfect, magical photorealistic raytracer? Congratulations: you've managed to co-opt the entirety of your CPU for the task of rendering a static 3D scene, leaving nothing left for the physics. All we've managed to accomplish is taking the "interactive" out of "interactive media". Being able to influence the world around us is a key ingredient of immersion in games, and this requires ever more accurate physics, which is arguably just as difficult to calculate as raytracing. The most advanced real-time physics engine to date is Lagoa Multiphysics, and it can only just barely simulate a tiny scene in a well-controlled environment before it completely decimates a modern CPU. This is without any complex rendering at all. Now try doing that for a scene with a radius of several miles. Oh, and remember our issue with scale? It applies to physics too - except with physics, it's an order of magnitude more difficult.

6. Content
As many developers have been discovering, procedural generation is not magic pixie dust you can sprinkle on problems to make them go away. Yet, without advances in content generation, we are forced to hire armies of artists to create the absurd amounts of detail required by modern games. Raytracing doesn't solve this problem, it makes it worse. In any given square mile of a human settlement, there are billions of individual objects, ranging from pine cones, to rocks, to TV sets, to crumbs, all of which technically have physics, and must be kept track of, and rendered, and even more importantly, modeled.

Despite multiple attempts at leveraging procedural generation, the content problem has simply refused to go away. Until we can effectively harness the power of procedural generation, augmented artistic tools, and automatic design morphing, the advent of fully photorealistic raytracing will be useless. The best graphics engine in the world is nothing without art.
7. AI
<Patrician|Away> what does your robot do, sam
<bovril> it collects data about the surrounding environment, then discards it and drives into walls
— Bash.org quote #240849

Of course, while we're busy desperately trying to raytrace supercomplex scenes with advanced physics, we haven't left any CPU time to calculate the AI! The AI in games is so consistently terrible it's turned into its own trope. The game industry spends nearly all its computational time trying to render a scene, leaving almost nothing for the AI routines, forcing them to rely on techniques from 1968. Think about that - we are approaching the point where AI in games comes down to a 50-year-old technique that was considered hopelessly outdated before I was even born. Oh, and I should also point out that graphics, physics, art, and AI are all completely separate fields with fundamentally different requirements, and all of them have to work together in a coherent manner just so you can shoot headshots in Call of Duty 22.
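The technique from 1968 is presumably A* search, published by Hart, Nilsson, and Raphael that year and still the backbone of game pathfinding today. A minimal Python version on an invented toy grid:

```python
import heapq

def astar(grid, start, goal):
    """A* search on a 4-connected grid of 0 = open, 1 = wall.
    Returns the length of the shortest path, or None if unreachable."""
    def h(p):  # Manhattan distance: admissible, so the result is optimal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start)]  # (estimated total, cost so far, node)
    best = {start: 0}
    while frontier:
        _, cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        if cost > best.get(node, float("inf")):
            continue  # stale queue entry; a cheaper route was found already
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] == 0:
                g = cost + 1
                if g < best.get((nx, ny), float("inf")):
                    best[(nx, ny)] = g
                    heapq.heappush(frontier, (g + h((nx, ny)), g, (nx, ny)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
steps = astar(grid, (0, 0), (2, 0))  # must route around the wall in the middle row
```

That the entire thing fits in thirty lines is rather the point: this is the level of sophistication games have CPU budget for, while the renderer consumes everything else.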
I know that raytracing is exciting, sometimes simply as a demonstration of raw computational power. But it always disheartens me when people fantasize about playing amazingly detailed games indistinguishable from real life when that simply isn't going to happen, even with the inevitable development2 of realtime raytracing. By the time it becomes commercially viable, it will simply be yet another incremental step in our eternal quest for infinite realism. It is an important step, and one we should strive for, but it alone is not sufficient to spark a revolution.
1 Found on the special features section of the How To Train Your Dragon DVD.
2 Disclaimer: I've been trying to develop an efficient raytracing algorithm for ages and haven't had much luck. These guys are faring much better.
September 23, 2012
Teenage Rebellion as a Failure of Society
Historians have noticed that the concept of teenage rebellion is a modern invention. Young adults often (but not always) have a tendency to be horny and impulsive, but the flagrant and sometimes violent rejection of authority associated with teenagers is a stereotype unique to modern culture. Many adults incorrectly assume this means we have gotten "too soft" and need to bring back spanking, paddles, and other harsher methods of punishment. As any respectable young adult will tell you, that isn't the answer, and in fact highlights the underlying issue of ageism that is creating an aloof, frustrated, and repressed youth.
The problem is that adults refuse to take children seriously. Until puberty, kids are often aware of this, but most simply don't care (and sometimes take advantage of it). As they develop into young adults, however, this begins to clash with their own aspirations. They want to be in control of their own lives, because they're trying to figure out what they want their lives to be. They want to explore the world and decide where to stand and what to avoid. Instead, they are locked inside a school for 6-7 hours and spoon-fed buckets of irrelevant information, which they must then regurgitate on a battery of tests that have no relation to reality. They are not given meaningful opportunities to prove themselves as functional members of society. Instead, they are explicitly forbidden from participating in the adult world until the arbitrary age of 18, regardless of how mature or immature they are. They are told that they can't be an adult not because of their behavior, but simply because they aren't old enough. The high school dropout and the valedictorian get to be adults at exactly the same time - 18.
Our refusal to let young adults prove how mature they can be is doubly ironic in the context of a faltering global economy in desperate need of innovative new technologies to create jobs. Teenagers are unrestricted by concepts of impossibility, and free from the consequences of failed experiments. They don't have to worry about acquiring government funding or getting published in a peer-reviewed journal. They just want to make cool things, and that is exactly what we need. So obviously, to improve student performance in schools, our politicians tie school funding to test scores. You can't legislate innovation, you can only inspire it. Filling in those stupid scantron forms is not conducive to creative thinking. Our hyper-emphasis on test scores has succeeded only in ensuring that the only students who get into college are ones that are good at taking tests, not inventing things.
Young adults are entirely capable of being mature, responsible members of society if we just give them the chance to be adults, instead of using an arbitrary age barrier that serves only to segregate them from the rest of society. They are doomed to be treated as second-class citizens not because they behave badly, but because they aren't old enough. Physical labor and repetitive jobs are being replaced by automated machines, and those jobs aren't coming back. The new economy isn't run by office drones that follow instructions like robots, but by technological pioneers that change the world. You can't institutionalize creativity, or put it on a test. You can't measure imagination or grade ingenuity.
So what do we do? We cut funding for creative art programs and increase standardized testing. Our attempts to save our educational system are only ensuring its imminent demise as it prepares kids to live in a world that no longer exists.
The most valuable commodity in this new economy will be your imagination - the one thing computers can't do. Maybe if we actually treated young adults like real people, their creativity could become the driving force of economic prosperity.
September 19, 2012
Analyzing XKCD: Click and Drag
Today, xkcd featured a comic with a comically large image that is navigated by clicking and dragging. In the interests of SCIENCE (and possibly accidentally DDoSing Randall's image server - sorry!), I created a static HTML file of the entire composite image.1
The collage is made up of 225 images2 that stretch out over a total image area 79872 pixels high and 165888 pixels wide. The images take up 5.52 MB of space and are named with a simple naming scheme
"ydxd.png"

where d represents a cardinal direction appropriate for the axis (n for north, s for south on the y axis; e for east, w for west on the x axis) along with the tile coordinate number - for example, "1n1e.png". Tiles are 2048x2048 PNG images with an average size of 24.53 KB. If you were to try and represent this as a single, uncompressed 32-bit 79872x165888 image file, it would take up 52.99 GB of space.

Assuming a human's average height is 1.8 meters, that would give this image a scale of about 1 meter per 22 pixels. That means the total composite image is approximately 3.63 kilometers high and 7.54 kilometers wide. It would take an average human 1.67 hours to walk from one end of the image to the other. Note that the characters at the far left say they've been walking for 2 miles - they are 67584 pixels from the starting point, which translates to 3.072 km or ~1.9 miles, so my rough estimates seem reasonably accurate.
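In the interests of SCIENCE, the naming scheme and the scale arithmetic can be sketched in a few lines of Python. Note that the handling of zero and negative coordinates below is my guess - only positive tile names like "1n1e.png" are described above.

```python
def tile_name(x, y):
    """Filename for the tile at coordinate (x, y), following the scheme
    described above: y component first (n for north, s for south), then
    x component (e for east, w for west)."""
    ns = f"{abs(y)}{'n' if y > 0 else 's'}"
    ew = f"{abs(x)}{'e' if x > 0 else 'w'}"
    return ns + ew + ".png"

# Sanity-checking the scale estimates at 22 pixels per meter.
PX_PER_M = 22
walked_km = 67584 / PX_PER_M / 1000       # distance walked by the far-left characters
total_width_km = 165888 / PX_PER_M / 1000 # full width of the composite image
```

Running this reproduces the figures above: about 3.072 km walked (~1.9 miles) and roughly 7.54 km of total width.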
If Randall spent, on average, one hour drawing each frame, it would take him 9.375 days of constant, nonstop work to finish this. If he instead spent an average of 10 minutes per frame, it would take ~37.5 hours, or almost an entire 40-hour work week.
Basically I'm saying Randall Munroe is fucking insane.
1 If you are on firefox or chrome, right-clicking and selecting "Save as" will download the HTML file along with all 225 images into a separate folder.
2 There are actually 3159 possible images (39 x 81), but all-white and all-black images are not included, instead being replaced by either the default white background or a massive black <div> representing the ground, located 28672 pixels from the top of the image, with a height of 51200.
August 22, 2012
What Is A Right Answer?
I find that modern culture is often obsessed with a concept of wrongness. It is a tendency to paint things in a black and white fashion, as if there are simply wrong answers and right answers and nothing in-between. While I have seen this in every single imaginable discipline (including art and music, which is particularly disturbing), it is most obvious to me in the realm of programming.
When people aren't making astonishingly over-generalized statements, like trying to say one programming language is better than another without context, we often try to find the "best" way to do something. The problem is that we rarely bother to think about exactly what makes the best answer the best answer. Does it have to be fast? If speed were the only thing that mattered, we'd write everything in assembly. Does it have to be simple? I could list a thousand instances where simplicity fails to account for edge-cases that render the code useless. Does it have to be easy to understand? If you want something to be easy to understand, then the entire C standard library is one giant wrong answer that's being relied upon by virtually every single program in the entire world.
For a concept taken for granted by most programmers, defining what exactly makes an implementation "optimal" is incredibly difficult. A frightening number of programmers are also incapable of realizing this, and continue to hang on to basic assumptions that one would think should hold everywhere, when very few of them actually do. Things like "the program should not crash" seem reasonable, but what if you want to ensure that a safety feature crashed the program instead of corrupting the system?
The knee-jerk reaction to this is "Oh yeah, except for that." This phrase seems to underlie many of the schisms in the programming community. Virtually every single assumption that could be held by a programmer will be wrong somewhere. I regularly encounter programmers who think you should do something a specific way no matter what, until you ask them about making a kernel. "Oh yeah, except for that." Or a sound processing library. "Oh yeah, except for that." Or a rover on mars. Or a video decoder. Or a raytracer. Or a driver. Or a compiler. Or a robot. Or scientific computing.
All these except-for-that's betray the fundamental failure of modern programming culture: There is no right answer. The entire concept of Right and Wrong does not belong in programming, because you are trying to find your way to a solution and there are billions of ways to get there, and the one that works best for your particular situation depends on hundreds of thousands of independent variables. Yet, there is a "right answer" on Stack Overflow. There are books on writing "proper code". There are "best practices". People talk about programming solutions that are "more right" than others. There are black and white, right and wrong, yes or no questions pervading the consciousness of the majority of programmers, who foolishly think that you can actually reduce an engineering problem into a mathematical one, despite overwhelming evidence that you simply cannot escape the clutches of reality and how long it takes an electron to reach the other side of a silicon wafer.
If you ask someone how to do something the right way, you are asking the wrong question. You should be asking them how to solve your problem. You didn't do something the wrong way, you simply solved the wrong problem.
August 19, 2012
An Artist Trapped Inside A Software Engineer
Almost a decade ago, I thought I wanted to make games. I began building a graphics engine for that purpose, since back then, there were almost no open-source 2D graphics engines using 3D acceleration. It wasn't until later that I discovered I liked building the graphics engine more than I liked building games.
Times have changed, but I continue to tinker away on my graphics engine while going to college and learning just how dumb the rest of the world is. In the most recent bout of astonishing stupidity, my country has decided it doesn't recognize political asylum for people it doesn't like. It wasn't until reality had begun a full-scale assault on my creativity and imagination that I truly understood why artists feel compelled to lose themselves in their imaginations.
My imagination. It is something I could not possibly describe in any meaningful way. Art exists because some things can't be described, they must be shown. And yet, few things in my imagination are my own. I hunt down talented artists and visionaries, lose myself in the worlds they constructed, then take everything out of context and reconstruct my own worlds, perhaps based on another artist's vision, using the same concepts. I construct multiple visualizations, art styles, and game elements. My mental stage is fueled by awesome music, music that launches my imagination into incredible creative sprees. Sometimes I craft incredible melodies of my own, but they are rarely expressed in any truly satisfactory way in my music.
My life is one of creative frustration. I became obsessed with computer graphics as a way to realize my vision, but I wasn't interested in simply learning how to 3D model (which I happen to be terrible at, like everything else). I don't see the world as CGI, I see the world through the lens of a GPU. I look at things and ask, how might I render that? My imagination is not a static picture or movie, it's a world that is meant to be explored. Sometimes I play games for the storyline, or the gameplay, but the one thing that has always grabbed me is the ability to explore. I played Freelancer for 5 years, installed hundreds of mods, and was constantly enthralled simply by the exploration, the enormous universe, finding new systems, and discovering new places.
I can't draw a leaf. But I can create a mathematical model of it. I can calculate the textures and patterns, the branching veins, each with its own specular, diffuse, and transfer lighting functions. I can build abstractions and simulations, genetic recombinations and simplex noise algorithms. After I build tools to procedurally generate all the elements of a world, maybe then I can bring my imagination to life. But then, it's not really my imagination, it's what other artists inspire in me. I want to get as close to an artistic vision as possible, and beyond. I want to expand their artistic ideas and make them into something that is truly beautiful and inspiring, a clear extension of their vision, where its soul shines like a beacon instead of being buried under bureaucratic bullshit.
I am an artist who cannot draw. I'm a musician incapable of painting the sonic landscape of my imagination. I am a dreamer who has no dreams of his own. If I am a programmer, it is because programming is the only way for me to express my creativity. But programming itself is not simply a means to an end. Programming is my paintbrush, my canvas, and my palette. I know how to read x86 assembly. I have abused C++11 lambdas to create temporary closures that hold mutable state. I've crafted architectures and APIs and object-inheritance schemes and functional undo/redo stacks and lockless queues and kd-trees. Programming is my art and my music, and every new language or algorithm I explore is another instrument for me to use when building my symphony.
Yet, many programmers hold little respect for alternative opinions. People who don't conform to strict guidelines are viewed as either terrible programmers or "cowboy" programmers destined to bring ruin to every project they touch. Everything must follow protocol, everyone must do things this way or that way. Instead of celebrating our diversity in programming languages, we viciously attack each other for using a "terrible language". Perhaps I have simply been inside a strange anomaly where everyone is obsessed with corporate practices and coding standards instead of building things.
Or perhaps I'm an artist trapped inside a software engineer.
July 25, 2012
Coordinate Systems And Cascading Stupidity
Today I learned that there are way too many coordinate systems, and that I'm an idiot (but that was already well-established). I have also learned to not trust graphics tutorials, but the reasons for that won't become apparent until the end of this article.
There are two types of coordinate systems: left-handed and right-handed coordinate systems. By convention, most everyone in math and science uses right-handed coordinate systems with positive x going to the right, positive y going up, and positive z coming out of the screen. A left-handed coordinate system is the same, but positive z instead points into the screen. Of course, there are many other possible coordinate system configurations, each either being right or left-handed; some modern CAD packages have y pointing into the screen and z pointing up, and screen-space in graphics traditionally has y pointing down and z pointing into the screen.
If you start digging through DirectX and OpenGL, the handedness of the coordinate system being used is ill-defined due to their reliance on various perspective transforms. Consequently, while DirectX traditionally uses a left-handed coordinate system and OpenGL uses a right-handed one, you can simply use D3DXMatrixPerspectiveRH to give DirectX a right-handed coordinate system, and OpenGL actually uses a left-handed coordinate system by default in its shader pipeline - but all of these are entirely dependent on the handedness of the projection matrices involved. So, technically the coordinate system is whichever one you choose, but unlike the rest of the world, computer graphics has no real standard on which coordinate system to use, and so it's just a giant mess of various coordinate systems all over the place, which means you don't know what handedness a given function is for until things start getting funky.

I discovered all this because today I found out that, for the past 6 or so years (the entire time my graphics engine has ever existed in any shape or form), it has been rotating everything backwards. I didn't notice.
This happened due to a number of unfortunate coincidences. For many years, I simply didn't notice because I didn't know what direction the sign of a given rotation was supposed to rotate in, and even if I had, I would have assumed it to be the default for graphics for some strange reason (there are a lot of weird things in graphics). The first hint was when I was integrating with Box2D and had to reverse the rotation of its bodies to match up with my images. This did trigger an investigation, but I mistakenly concluded that it was Box2D that had it wrong, not me, because I was using atan2 to check coordinates, and I was passing them in as atan2(v.x, v.y). The problem is that atan2 is defined as float atan2(float y, float x), which means my coordinates were reversed and I was getting nonsense angles.

Now, here you have to understand that I was using a standard left-handed coordinate system, with y pointing up, x pointing right, and z pointing into the screen. The thing is, I wanted a coordinate system where y pointed down, and so I did as a tutorial instructed me and reversed all of my y coordinates in the low-level drawing functions.
So, when atan2(x, y) gave me bad results, I mistakenly thought, "Oh, I forgot to reverse the y coordinate!" Suddenly atan2(x, -y) was giving me angles that matched what my images were doing. The thing is, if you switch x and y and negate y, atan2(x,-y)==-atan2(y,x). One mistake had been incorrectly validated by yet another mistake, caused by yet another mistake!

You see, by inverting those y coordinates, I was accidentally reversing the result of my rotation matrices, which caused them to rotate everything backwards. This was further complicated by how the camera rotates things - if your camera is fixed, how do you make it appear to rotate? You rotate everything else in the opposite direction! Hence even though my camera was rotating backwards despite looking like it was rotating forwards, it was actually being rotated the right way for the wrong reason.
While I initially thought the fix would require some crazy coordinate-system juggling, the actual solution was fairly simple. A coordinate system with z pointing into the screen and y pointing down is still right-handed, which means it should play nicely with rotations from a traditional right-handed system. Since the handedness of a coordinate system is largely determined by the perspective matrix, reversing y coordinates in the drawing functions was actually reversing them too late in the pipeline. Hence, because I used D3DXMatrixPerspectiveLH, I had a left-handed coordinate system, and my rotations ended up being reversed. D3DXMatrixPerspectiveRH negates the z coordinate to switch the handedness of the coordinate system, but I like positive z pointing into the screen, so I instead hacked the left-handed perspective matrix and negated the y-scaling parameter in cell [2,2], then undid all the y-coordinate-inversion insanity that had been inside my drawing functions (you could also negate the y coordinate in any world transform matrix sufficiently early in the pipeline by specifying a negative y scaling in [2,2]). Suddenly everything was consistent, and rotations were happening in the right direction again. Now the camera rotation actually required the negative rotation, as one would expect, and I still got to use a coordinate system with y pointing down. Unfortunately it also reversed several rotation operations throughout the engine, some of which were functions that had been returning the wrong value this whole time so as to match up with the incorrect rotation of the engine - something that will give me nightmares for weeks, probably involving a crazed rabbit hitting me over the head with a carrot screaming "STUPID STUPID STUPID STUPID!"

What's truly terrifying is that all of this was indirectly caused by reversing the y coordinates in the first place. Had I instead flipped them in the perspective matrix itself (or otherwise properly transformed the coordinate system), I never would have had to deal with negating y coordinates, I never would have mistaken atan2(x,-y) as being valid, and I never would have had rotational issues in the first place. All because of that one stupid tutorial.
P.S. the moral of the story isn't that tutorials are bad, it's that you shouldn't be a stupid dumbass and not write unit tests or look at function definitions.
June 22, 2012
How Joysticks Ruined My Graphics Engine
It's almost a tradition.
Every time my graphics engine has been stuck in maintenance mode for 6 months, I'll suddenly realize I need to push out an update or implement some new feature. I then realize that I haven't actually paid attention to any of my testing programs, or their speed, in months. This is followed by panic, as I discover my engine running at half speed, or worse. Having made an infinite number of tiny tweaks that all could have caused the problem, I am often thrown into temporary despair only to almost immediately find some incredibly stupid mistake that was causing it. One time it was because I left the profiler on. Another time it was caused by calling the Render function twice. I'm gearing up to release the first public beta of my graphics engine, and this time is no different.
I find an old backup distribution of my graphics engine and run the tests, and my engine is running at 1100 FPS instead of 3000 or 4000 like it should be. Even the stress test is going at only ~110 FPS instead of 130 FPS. The strange part was that for the lightweight tests, it seemed to be hitting a wall at about 1100 FPS, whereas normally it hits a wall around 4000-5000 due to CPU⇒GPU bottlenecks. This is usually caused by some kind of debugging, so I thought I'd left the profiler on again, but no, it was off. After stepping through the rendering pipeline and finding nothing, I knew I had no chance of just guessing what the problem was, so I turned on the profiler and checked the results. Everything seemed relatively normal except-
PlaneShader::cEngine::_callpresent    1.0   144.905 us   19%   145 us   19%
PlaneShader::cEngine::Update          1.0    12.334 us    2%   561 us   72%
PlaneShader::cEngine::FlushMessages   1.0   546.079 us   70%   549 us   71%

What the FUCK?! Why is 70% of my time being spent in FlushMessages()? All that does is process window messages! It shouldn't take any time at all, and here it is taking longer to process messages than it does to render an entire frame!

bool cEngine::FlushMessages()
{
  PROFILE_FUNC();
  _exactmousecalc();
  //windows stuff
  MSG msg;
  while(PeekMessageW(&msg, NULL, 0, 0, PM_REMOVE))
  {
    TranslateMessage(&msg);
    DispatchMessageW(&msg);
    if(msg.message == WM_QUIT)
    {
      _quit = true;
      return false;
    }
  }
  _joyupdateall();
  return !_quit; //function returns opposite of quit
}

Bringing up the function, there don't seem to be many opportunities for it to fail. I go ahead and comment out _exactmousecalc() and _joyupdateall(), wondering if, perhaps, something in the joystick function was messing up. Lo and behold, my speeds are back to normal! After re-inserting the exact mouse calculation, it is, in fact, _joyupdateall() causing the problem. This is the start of the function:

void cInputManager::_joyupdateall()
{
  JOYINFOEX info;
  info.dwSize = sizeof(JOYINFOEX);
  info.dwFlags = [ Clipped... ];
  for(unsigned short i = 0; i < _maxjoy; ++i) {
    if(joyGetPosEx(i, &info) == JOYERR_NOERROR) {
      if(_allbuttons[i] != info.dwButtons) {

Well, shit, there isn't really any possible explanation here other than something going wrong with joyGetPosEx. It turns out that calling joyGetPosEx when there isn't a joystick plugged in takes a whopping 34.13 µs (microseconds) on average, which is almost as long as it takes me to render a single frame (43.868 µs). There's probably a good reason for this, but evidently it is not good practice to call it unnecessarily. Fixing this was relatively easy - just force an explicit call to look for active joystick inputs and only poll those - but it's still one of the weirdest performance bottlenecks I've come up against.
Of course, if you aren't developing a graphics engine, consider that 1/60 of a second is 16666 µs - 550 µs leaves a lot of breathing room for a game to work with, but a graphics engine must not force any unnecessary cost on to a program that is using it unless that program explicitly allows it, hence the problem.
Then again, calculating invsqrt(x)*x is faster than sqrt(x), so I guess anything is possible.
May 29, 2012
Answers To All The Questions I Asked As A Kid
When I was growing up and trying to figure out what was going on in this crazy planet I was born on, there were several important questions I asked that took many, many years to find answers to. What was frustrating is that almost every single answer was usually extremely obvious once I could just find someone who actually knew what they were talking about. So here are the answers to several simple life questions that always bugged me as a kid, based on my own life experiences. Many people will probably disagree with one or two things here, which is fine, because I don't care.
Do something that matters to you, and know this: Life isn't fair, so you have to make it fair. You have to do things the hard way. You have to fail miserably hundreds of times and keep on trying because you aren't going to let life win. You have to do what matters to you, no matter what anyone else thinks. You have to fight for it, and you have from now until you die to win. Go.
2. If you like programming, bury yourself in it. It is, by far, the most useful skill you can have right now.
3. Read the instructions.
4. Bunnies make everything better =:3
1. What is the purpose of Math?
Math is simply repeated abstraction and generalization. All of it. Every single formula and rule in mathematics is derived by generalizing a simpler concept, all the way down to the axioms of set theory. It was invented to make our lives easier by abstracting away annoying things. Why count from 2 to 6 when you can simply add 4? Adding is simply repeated counting, after all. But then, if you can add 4 to 2 and get 6, you should be able to generalize it so you can add 4 to 2.5 and get 6.5, and what about adding 2 to 4 over and over and over? Well that's just multiplication. What about multiplying 2 over and over and over? Well that's just exponentiation. What's that funky Gamma Function I see every now and then? That's simply the factorial ($5! = 5\cdot 4\cdot 3\cdot 2\cdot 1$) generalized to real and complex numbers, so it can evaluate 5.5!, its just written Γ(5.5 - 1) = Γ(4.5). Math is generalization.Usually smart people figured all the generalizations out for you, so in some cases you can simply memorize a formula or look it up, but its much easier to simply remember the rules that the smart people figured out for you so you can rederive everything you need without having to memorize it. When you understand the rules (which are all carefully constructed to be consistent with each other), you can then use Mathematics as a language to express problems in. By abstracting the problem into mathematics, the answer becomes much easier to obtain. The only thing math does is make thing easier to do by abstracting them. That's all.
2. Why does college think high grades in math correspond to programming ability?
This is because programming is math. Programming is all about abstracting a problem to automate it. Think of it as a lingual descendant of Math. The problem is that in high school they teach you calculus and programming at the same time and try to tell you that they are related. They aren't. Calculus doesn't have anything to do with programming. Set Theory does. The mathematical constructs of logic are what programming derives from, not calculus. Naturally, they don't teach you any of that. Even though you can consider programming a sub-discipline of mathematics, ones programming ability is not connected to your test-taking abilities.3. How do you compose music?
First, you come up with a melody. The best way to do this is to find a song you like, and figure out its melody. Knowing basic music theory will help, because then you know what a chord progression is, so you can find that too. Simply rip off all the common chord progressions you like - you'll come up with your own later. Rhythm is important too, so take note of that - be careful to focus on notes that seem to carry the beat.Great, that was the easy part. But how do you make techno music? How do you record things? How does it get on a computer? All I have is this stupid electric piano I can record things off of, there has to be a better way! The answer is DAWs and VSTi, or Digital Audio Workstations and their virtual instrument plugins. A great DAW to start with is FL Studio, and there are a lot of free VSTi plugins floating around. VSTi plugins are simply synths or effects or other tools that you drop into your DAW and use to play notes or modify the sound. If you want natural sounding instruments, use samples. Soundfonts are widely supported, have an extension .sf2 and there are gigabytes upon gigabytes of free samples everywhere. You should try to hunt down an independent artist whose music you like, they'll often be willing to give on advice on how they create their style.
But now I've made a song, where do I post it? Soundcloud, newgrounds, last.fm, and bandcamp lets you sell it for moneys. Don't worry if you're terrible, just keep doing it over and over and over and paying attention to advice and constructive criticism.
4. How do you draw clean art?
Clean digital art is commonly done using vectorization and gradients. There are multiple Photoshop techniques that can be combined with tablets to create very nice-looking lines via fake tapering and line adjustments, but more commonly the tablet is simply pressure-sensitive. There are many different techniques for achieving various styles, so it's more appropriate to ask the artists themselves.

5. Why do adults kiss?
I say instinct, but no one really knows yet (only about 90% of humans kiss). Provided you are in a culture that does kiss, you'll grow up to be around 16-17 and suddenly you'll feel this inexplicable urge to kiss whomever you've fallen in love with, for no apparent reason. It's theorized to have arisen from the need to evaluate certain proteins in a potential partner, which requires physical contact, along with various other things. I say instinct because I always thought it wasn't instinct and I wouldn't fall for it and then why am I fantasizing about kissing girls CRAP.

6. Why do adults fall in love in the first place?
Instinct. By the time you are 20, if you haven't yet found an intimate partner, you will feel crushing loneliness regardless of how many friends you have. Do not underestimate just how badly Nature wants you to have babies. This is why people get desperate - the desire to be in an intimate, loving relationship can be extremely powerful. It also leaves a giant hole that often explains various other bizarre things adults do in apparent attempts to kill themselves in the most amusing way possible.

7. Why don't popular people respond to fan mail very often?
This usually only comes up if you are using a bad medium. Artists often want to talk to their fans, but the majority of people are incredibly stupid (see below), and thus in certain cases the signal-to-noise ratio is so low they simply can't justify spending the time to find you in a sea of insane idiocy when they have better things to do, like be awesome. Some artists simply don't want to be bothered, usually as a result of being disillusioned with how utterly stupid most people are, so it's hard to blame them, but it is unfortunate. There will usually be a way to at least send a meaningful thank-you to the artist, possibly by e-mail or Twitter if you look hard enough, and they will always appreciate it if they can just find your message. Never assume an artist is too stuck up and full of themselves to answer you. They just can't find you. Although quite a few of them actually are assholes.

8. Why is everything I do always wrong?
Because people are idiots and have no idea what they're talking about. Only ever listen to someone telling you that you are doing something wrong if you know they have extensive experience in exactly what you are trying to do. Otherwise, take the advice with a mountain-sized lump of salt, because people in specialized professions almost always take advice out of context and inappropriately simplify it to the point of it being completely wrong. There is always a catch. This is taken up to eleven in programming - I once had someone who did networking tell me my choice of language for my graphics engine was completely wrong, and he insisted I was so bad at programming I should just stop, because it would make the world a better place. He is an asshole, and he is completely wrong. Don't listen to those people, ever.

9. Why does everyone call everyone else an idiot?
BECAUSE EVERYONE IS AN IDIOT. Trying to comprehend just how unbelievably stupid people can be is one of the most difficult parts of growing up. It's far too easy to dismiss someone as evil when in fact they really are that dumb. "Never attribute to malice that which is adequately explained by stupidity" - Hanlon's Razor. The best you can do is dedicate your life to not being an idiot in your chosen profession, and to not assuming that makes you qualified to give advice on vaguely related fields (see the networking programmer above).

10. Why do adults argue about everything?
Because they are 10-year-olds who have to pay taxes, and nobody really knows how to pay taxes properly. They don't know what they're doing. Common sense is not common, people are not rational, and people are idiots. They don't care if they're wrong, and they don't care if you're right. They just don't care, because life sucks, and life isn't fair, and they didn't get the memo until after they'd wasted their youth either being too drunk to remember anything or studying in a library all day to get a useless scrap of paper.

Do something that matters to you, and know this: Life isn't fair, so you have to make it fair. You have to do things the hard way. You have to fail miserably hundreds of times and keep on trying, because you aren't going to let life win. You have to do what matters to you, no matter what anyone else thinks. You have to fight for it, and you have from now until you die to win. Go.
Cheat Codes
1. If you don't know how to properly socialize with someone, ask them about themselves. There is nothing people love more than talking about themselves.
2. If you like programming, bury yourself in it. It is, by far, the most useful skill you can have right now.
3. Read the instructions.
4. Bunnies make everything better =:3