You'd think that programmers would get over these ridiculous language wars. The consensus should be that any one programmer is going to use whatever language they are most comfortable with that gets the job done most efficiently. If someone knows C/C++, C#, and Java, they're probably going to use C++ to write a console game. You can argue that language [x] is terrible because of [y], but the problem is that ALL languages are terrible for one reason or another. Every aspect of a language's design is a series of trade-offs, and if you criticize a language that is useful in one context because it isn't useful in another, you are ignoring the entire concept of a trade-off.
These arguments go on for hours upon hours, about what exactly is a trade-off and what languages arguably have stupid features and what makes someone a good programmer and blah blah blah blah SHUT UP. I don't care what language you used, if your program is shit, your program is shit. I don't care if you wrote it in Clojure or used MongoDB or used continuations and closures in whatever esoteric functional language happens to be popular right now. Your program still sucks. If someone else writes a better program in C without any elegant use of anything, and it works better than your program, they're doing their job better than you.
I don't care if they aren't as good a programmer as you are, by whatever stupid, arbitrary standards you've invented to make yourself feel better; they're still doing a better job than you. I don't care if your Haskell editor was written in Haskell. Good for you. It sucks. It is terribly designed. Its workflow is about as fluid as a blob of molasses on a mountain in January. I don't care if you are using a fantastic stack of professionally designed standard libraries instead of re-inventing the wheel. That guy over there re-invented the wheel the wrong way 10 times and his program is better than yours, because it's designed with the user in mind instead of a bunch of stupid libraries. I don't care if you're using Mercurial over SVN, or Git on Linux using Emacs with a bunch of extensions that make you super productive. Your program still sucks.
I am sick and tired of people judging programmers on a bunch of rules that don't matter. Do you know functional programming? Do you know how to implement a LAMP stack? Obviously you don't use C++ or anything like that, do you?
These programmers have no goddamn idea what they're talking about. But that isn't what concerns me. What concerns me is that programmers are so obsessed over what language is best or what tool is best or what library they should use when they should be more concerned about what their program actually DOES. They get so caught up in building whatever elegant crap they're trying to build that they completely forget what the end user experience is, especially when the end user has never used the program before. Just as you are not a slave to your tools, your program is not enslaved to your libraries.
Your program's design should serve the user, not a bunch of data structures.
December 2, 2011
The Irrationality of Idiots
If Everyone Else is Such an Idiot, How Come You're Not Rich? - Megan McArdle

I run around calling a whole lot of people and/or things stupid, dumb, moronic, or some other variation of idiot. As the above quote exemplifies, saying such things tends to be a bit dangerous, since if everyone else were an idiot, you should be rich as hell. My snarky reaction to that, of course, would be that I'm not rich yet (and even then, "rich" in the sense of the quote is really just a metaphor for success, however you define it for yourself), but in truth there are very specific reasons I call someone an idiot, and they don't necessarily involve actual intelligence.
To me, someone is an idiot if they refuse to argue in a rational manner. If you ignore evidence or use nonsensical reasoning and logical fallacies to support your beliefs, you're an idiot. If you don't like me calling you an idiot, that's just fine, because I acknowledge your existence about as much as I acknowledge the existence of dirty clothes on my bedroom floor. It's only when the pile of dirty laundry gets large enough to impede movement that I actually notice and clean it up. In the case of suffocating amounts of stupidity, I usually just go somewhere else. The rest of the time, stupid people can grudgingly function in a society, but they can't take part in running it, because designing and running a society requires rational thinking and logical arguments, or nothing gets done.
I can get really angry about certain things, but I must yield to opinions that have a reasonable basis, if only to acknowledge that I might be wrong, even if I think I'm not. Everything I say or do must have some sort of logical basis, even if it originated from pure intuition. So long as you can poke legitimate holes in an accepted theory, you can hold some pretty crazy opinions that can't be considered illogical, though perhaps still incredibly risky or unlikely.
All the other times I call someone an idiot, I'm usually being lazy when I should really be calling the action idiotic. For example, I can't legitimately call Mark Zuckerberg an idiot. If I call him an idiot, I'm not forming a legitimate opinion; it's probably because he did something that pissed me off and I'm ranting about it, and you are free to ignore my invalid opinion, at least until I clarify that what he did was idiotic, not him. Of course, sometimes people repeatedly do things that are just so mind-bogglingly stupid that it is entirely justified to call them morons, because they are displaying a serious lack of bona fide intelligence. Usually, though, most people are entirely capable of rational thought but simply do not care enough to exercise it, in which case their idiocy stems from an unwillingness to use rationality, not a lack of actual intelligence.
I bring this up because it seems to be a serious problem. What happens when we lose rationality? People can't compromise anymore, and we get a bunch of stupendously idiotic proposals borne out of ignorance that no longer has to pass through a filter of logical argumentation. All irrational disputes become polarized because neither side is willing to listen to the other, and the emotions that are intrinsically tied to the dispute prevent any meaningful progress from being made. Society breaks down in the face of irrationality because irrationality refuses to acknowledge basic realities, like the fact that people are different.
Well gee, that sounds like our current political mess.
I am an aggressive supporter of educational reform, and one of the things that I believe should be taught in schools is not only rational thought and logical arguments, but how rational thought can complement creativity and irrational emotions. We cannot rid ourselves of illogical beliefs, because then we've turned into Vulcans, but we must learn, as a species, when our emotions are appropriate, and when we need to exercise our ability to be rational agents. As it is, we are devolving into a prehistoric mess of irrational demands and opinions that only serve to drag society backwards, just as we begin unlocking the true potential of our technology.
December 1, 2011
The Great Mystery of Linear Gradient Lighting
A long, long time ago, in pretty much the same place I'm sitting in right now, I was learning how one would do 2D lighting with soft shadows and discovered the age-old adage in 2D graphics: linear gradient lighting looks better than mathematically correct inverse-square lighting.
Strange.
I brushed it off as artistic license and perceptual trickery, but over the years, as I dug into advanced lighting concepts, nothing could explain this. It was a mystery. Around the time I discovered microfacet theory, I figured it could theoretically be an attempt to approximate non-Lambertian reflectance models, but even that wouldn't turn an inverse-square curve into a linear one.
This bizarre law even showed up in my 3D lighting experiments. Attempting to invoke the inverse square law would simply result in extremely bright and dark areas and would look absolutely terrible, and yet the only apparent fix I saw anywhere was simply calculating light via linear distance in clear violation of observed light behavior. Everywhere I looked, people calculated light on a linear basis, everywhere, on everything. Was it the equations? Perhaps the equations being used operated on linear light values instead of exponential ones and so only output the correct value if the light was linear? No, that wasn't it. I couldn't figure it out. Years and years and years would pass with this discrepancy left unaccounted for.
A few months ago I noted an article on gamma correction and assumed it was related to color correction or some other post-process effect designed to compensate for monitor behavior, and put it as a very low priority research point on my mental to-do list. No reason to fix up minor brightness problems until your graphics engine can actually render everything properly. Yesterday, though, I happened across a Hacker News posting about learning modern 3D engine programming. Curious if it had anything I didn't already know, I ran through its topics, and found this. Gamma correction wasn't just making the scene brighter to fit with the monitor; it was compensating for the fact that most images are actually already gamma-corrected.
In a nutshell, the brightness response of a monitor follows a power curve, not a line (with an exponent of about 2.2). The result is that a linear gradient displayed on the monitor is not actually increasing in brightness linearly; because it's mapped through that power curve, the displayed brightness rises slowly at first and then shoots up near the top. This is related to the human visual system processing luminosity on a roughly logarithmic scale. The curve in question is this:
Source: GPU Gems 3 - Chapter 24: The Importance of Being Linear
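In code, that transfer curve and its inverse are just a pow() call. Here's a minimal sketch, using the pure 2.2 power approximation rather than the exact piecewise sRGB transfer function (the function names are mine):

#include <cmath>

// Convert a gamma-encoded (display) value in [0,1] to linear light,
// using the simple 2.2 power approximation of the sRGB curve.
double to_linear(double encoded) { return std::pow(encoded, 2.2); }

// Convert linear light in [0,1] back to a gamma-encoded display value.
double to_gamma(double linear) { return std::pow(linear, 1.0 / 2.2); }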
You can see the effect in this picture, taken from the article I mentioned:
The thing is, I always assumed the top linear gradient was a linear gradient. Sure it looks a little dark, but hey, I suppose that might happen if you're increasing at 25% increments, right? WRONG. The bottom strip is a true linear gradient1. The top strip is a literal assignment of linear gradient RGB values, going from 0 to 62 to 126, etc. While this is, digitally speaking, a mathematical linear gradient, what happens when it gets displayed on the screen? It gets distorted by the CRT gamma curve seen in the above graph, so the displayed brightness follows the power curve instead. The bottom strip, on the other hand, is gamma-corrected - it is NOT a mathematical linear gradient. Its values go from 0 to 134 to 185. As a result, when this curve is displayed on your monitor, its values are dragged down by the exact inverse power curve, resulting in a true linear ramp in brightness. An image that has been "gamma-corrected" in this manner is said to exist in the sRGB color space.
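As a quick sanity check (my arithmetic, assuming the pure 2.2 approximation), you can recover the bottom strip's values by gamma-correcting the linear steps: \[ 255 \times 0.25^{1/2.2} \approx 136, \qquad 255 \times 0.5^{1/2.2} \approx 186 \] which is close to the strip's 134 and 185; the small difference comes from the actual sRGB transfer function not being a pure 2.2 power curve.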
The thing is, most images aren't linear. They're actually in the sRGB color space; otherwise they'd look totally wrong when we viewed them on our monitors. Normally this doesn't matter, which is why most 2D games simply ignore gamma completely. Because all a 2D engine does is take a pixel and display it on the screen without touching it, if you enable gamma correction you will actually over-correct the image and it will look terrible. This becomes a problem with image editing, because digital artists are drawing and coloring things on their monitors, making sure everything looks good on those same monitors. So if an artist were visually trying to make a linear gradient, they would probably make something similar to the already gamma-corrected strip we saw earlier. Because virtually no image editors linearize images when saving (for good reason), the resulting image an artist creates is actually in the sRGB color space, which is why just turning on gamma correction will usually make everything look bright and washed out: you are correcting images that are already gamma-corrected. This is actually a good thing due to subtle precision issues, but it creates a serious problem when you start trying to do lighting calculations.
The thing is, lighting calculations are linear operations. It's why you use linear algebra for most of your image processing needs. Because of this, when I tried to use the inverse-square law in my lighting functions, the value I was multiplying onto the already-gamma-corrected image was not gamma-corrected! In order to do proper lighting, you have to first linearize the gamma-corrected image, perform the lighting calculation on it, and then re-gamma-correct the end result.
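Here's a minimal sketch of that pipeline for a single color channel, again using the 2.2 approximation (my own illustration, not code from the article):

#include <cmath>

// Light one gamma-encoded color channel (value in [0,1]) with a light of
// the given linear intensity at distance d, returning a gamma-encoded result.
double light_channel(double encoded, double intensity, double d)
{
  double linear = std::pow(encoded, 2.2);    // 1. linearize the sRGB-ish texel
  double lit = linear * intensity / (d * d); // 2. inverse-square falloff, in linear space
  if (lit > 1.0) lit = 1.0;                  // clamp before re-encoding
  return std::pow(lit, 1.0 / 2.2);           // 3. re-gamma-correct for display
}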
Wait a minute, what did we say the gamma curve was? It's $$x^{2.2}$$, so $$x^{0.45}$$ will gamma-correct the value $$x$$. But the inverse square law states that the intensity of a light is actually $$\frac{1}{x^2}$$, so if you were to gamma-correct the inverse square law, you'd end up with: \[ \left(\frac{1}{x^2}\right)^{0.45} = \left(x^{-2}\right)^{0.45} = x^{-0.9} \approx x^{-1} \]
That's almost linear!2
OH MY GOD
That's it! The reason I saw linear curves all over the place was because they were a rough approximation to gamma correction! The reason linear lighting looks good in a 2D game is because it's actually an approximation to a gamma-corrected inverse-square law! Holy shit! Why didn't anyone ever explain this?!3 Now it all makes sense! Just to confirm my findings, I went back to my 3D lighting experiment, and sure enough, after correcting the gamma values, using the inverse-square law for the lighting gave correct results! MUAHAHAHAHAHAHA!
For those of you using OpenGL, you can implement gamma correction as explained in the article mentioned above. For those of you using DirectX9 (not 10), you can simply enable
D3DSAMP_SRGBTEXTURE on whichever texture stages are using sRGB textures (usually only the diffuse map), and then enable D3DRS_SRGBWRITEENABLE during your drawing calls (a gamma-correction stateblock containing both of those works nicely). For things like the GUI, you'll probably want to bypass the sRGB part. Like OpenGL, you can also skip D3DRS_SRGBWRITEENABLE and simply gamma-correct the entire blended scene using D3DCAPS3_LINEAR_TO_SRGB_PRESENTATION in the Present() call, but this has a lot of caveats attached. In DirectX10, you no longer use D3DSAMP_SRGBTEXTURE. Instead, you use an sRGB texture format (see this presentation for details).
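For reference, a minimal sketch of those two DirectX9 states (assuming device is a valid IDirect3DDevice9* and the diffuse map is on stage 0):

#include <d3d9.h>

void draw_gamma_correct(IDirect3DDevice9 *device)
{
  device->SetSamplerState(0, D3DSAMP_SRGBTEXTURE, TRUE); // linearize diffuse texel reads
  device->SetRenderState(D3DRS_SRGBWRITEENABLE, TRUE);   // re-encode to sRGB on write
  // ... issue lit draw calls here ...
  device->SetRenderState(D3DRS_SRGBWRITEENABLE, FALSE);  // bypass sRGB for e.g. the GUI
  device->SetSamplerState(0, D3DSAMP_SRGBTEXTURE, FALSE);
}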
1 or at least much closer, depending on your monitor's true gamma response

2 In reality I'm sweeping a whole bunch of math under the table here. What you really have to do is move the inverse-square curve around until it overlaps the gamma curve, then apply it, and you'll get something that is roughly linear.
3 If this is actually standard course material in a real graphics course, and I am just really bad at finding good tutorials, I apologize for the palm hitting your face right now.
November 24, 2011
Signed Integers Considered Stupid (Like This Title)
Unrelated note: If you title your article "[x] considered harmful", you are a horrible person with no originality. Stop doing it.
Signed integers have always bugged me. I've seen quite a bit of signed integer overuse in C#, but it is most egregious when dealing with C/C++ libraries that, for some reason, insist on using for(int i = 0; i < 5; ++i). Why would you ever write that? i cannot possibly be negative, and for that matter shouldn't be negative, ever. Use for(unsigned int i = 0; i < 5; ++i), for crying out loud.
But really, that's not a fair example. You don't really lose anything using an int for the i value there, because the range isn't large enough for it to matter. The places where this becomes stupid are things like using a signed integer for height and width, or returning a signed integer count. Why on earth would you want to return a negative count? If the count fails, return an unsigned -1, which is just the maximum possible value for your chosen unsigned integral type. Of course, certain people seem to think this is a bad idea because then you will return the largest positive number possible. What if they interpret that as a valid count and try to allocate 4 gigs of memory? Well gee, I don't know, what happens when you try to allocate -1 bytes of memory? In both cases, something is going to explode, and in both cases, it's because the person using your code is an idiot. Neither way is safer than the other. In fact, signed integers cause far more problems than they solve.
One of the most painfully obvious issues here is that virtually every single architecture in the world uses the two's complement representation of signed integers. When you are using two's complement on an 8-bit signed integer type (a char in C++), the largest positive value is 127, and the largest negative value is -128. That means a signed integer can represent a negative number so large it cannot be represented as a positive number. What happens when you do (char)abs(-128)? It tries to return 128, which overflows back to... -128. This is the cause of a host of security problems, and what's hilarious is that a lot of people try to use this to fuel their argument that you should use C# or Java or Haskell or some other esoteric language that makes them feel smart. The fact is, any language with fixed-size integers has this problem. That means C# has it, Java has it, most languages have it to some degree. This bug doesn't mean you should stop using C++, it means you need to stop using signed integers in places they don't belong. Observe the following code:
if (*p == '*')
{
  ++p;
  total_width += abs (va_arg (ap, int));
}
This is retarded. Why on earth are you interpreting an argument as a signed integer only to then immediately call abs() on it? So a brain-damaged programmer can throw in negative values and not blow things up? If it can only possibly be valid when it is a positive number, interpret it as an unsigned int. Even if someone tries putting in a negative number, they will serve only to make total_width abnormally large, instead of potentially putting in -128, which causes abs() to return -128, makes total_width far too small, and opens up a buffer overflow that lets someone hack your program. And don't go declaring total_width as a signed integer either, because that's just stupid. Using an unsigned integer here closes a potential security hole and makes it even harder for a dumb programmer to screw things up1.
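For illustration, here's the same fragment with the argument read as unsigned instead (my hypothetical rewrite, not an actual patch to the library in question):

if (*p == '*')
{
  ++p;
  /* Reading the width as unsigned means a negative caller value just
     becomes a huge width and fails loudly, instead of -128 slipping
     through abs() and shrinking total_width into a buffer overflow. */
  total_width += va_arg (ap, unsigned int);
}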
I can only attribute the vast overuse of int to programmer laziness. unsigned int is just too long to write. Of course, that's what typedefs are for, so that isn't an excuse. Maybe they're worried a programmer won't understand how to put a -1 into an unsigned int? Even if they didn't, you could still cast the int to an unsigned int to serve the same purpose and close the security hole. I am simply at a loss as to why I see ints all over code that could never possibly be negative. If a value can never possibly be negative, you are already assuming it won't be negative, so it's a much better idea to make it impossible for it to be negative instead of giving hackers 200 possible ways to break your program.
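Both workarounds mentioned above are one-liners (the typedef name here is just an example):

#include <limits.h>

typedef unsigned int uint;   /* laziness-proof shorthand for unsigned int */

uint error_count = (uint)-1; /* the unsigned -1 sentinel: wraps to UINT_MAX */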
1 There's actually another error here in that total_width can overflow even when unsigned, and there is no check for that, but that's beyond the scope of this article.
November 7, 2011
Why Kids Hate Math
They're teaching it wrong.
And I don't just mean teaching the concepts incorrectly (although they do plenty of that); I mean their teaching priorities are completely backwards. Set theory is really fun, and basic set theory can be taught to someone without them needing to know how to add or subtract. We teach kids Venn diagrams but never teach them all the fun operators that go with them. Why not? You say they won't understand? Bullshit. If we can teach third graders binary, we can teach them set theory. We take forever to get around to teaching algebra to kids because it's considered difficult. If something is a difficult conceptual leap, then you don't want to delay it, you want to introduce the concepts as early as possible. I say start teaching kids algebra once they know basic arithmetic. They don't need to know how to do crazy weird stuff like x * x = x² (they don't even know what ² means), but you can still introduce them to the idea of representing an unknown value with x. Then you can teach them exponentiation and logs and all those other operators first in the context of numbers, and then in the context of unknown variables. Then algebra isn't some scary thing that makes all those people who don't understand math give up; it's something you simply grow up with.
In a similar manner, what the hell is with all those trig identities? Nobody memorizes those things! You memorize like, 2 or 3 of them, and almost only ever use sin² + cos² = 1. Likewise, nobody ever uses integral trig identities, because if you are using them you should have converted your coordinate system to polar coordinates, and if you can't do that you can just look them up, for crying out loud. Factoring and completing the square can be useful, but forcing students to do these problems over and over when they almost never actually show up in anything other than spoon-fed equations is insane.
Partial fractions, on the other hand, are awesome and fun, so why on earth are they only taught in intermediate calculus?! Kids are ALWAYS trying to pull apart fractions like that, and we always tell them not to do it - why not just teach them the right way to do it? By the time they finally got around to teaching me partial fractions, I was expecting some horrifically difficult, painful, complex process. It isn't. You just have to follow a few rules and then 0 out some functions. How can that possibly be harder than learning the concept of differentiation? And it's useful too!
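To show just how mechanical it is, here's the kind of decomposition I mean (my own example, not from any particular curriculum): \[ \frac{1}{x^2 - 1} = \frac{A}{x-1} + \frac{B}{x+1} \] Multiply both sides by $$x^2 - 1$$ to get $$1 = A(x+1) + B(x-1)$$, then 0 out each term in turn: setting $$x = 1$$ gives $$A = \frac{1}{2}$$, and setting $$x = -1$$ gives $$B = -\frac{1}{2}$$. That's the whole trick.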
Let's say we want to teach someone basic calculus. How much do they need to know? They need to know addition, subtraction, division, multiplication, fractions, exponentiation, roots, algebra, limits, and derivatives. You could teach someone calculus without them knowing what sine and cosine even are. You could probably argue that, with proper teaching, calculus would be about as hard, or maybe a little harder, than trigonometry. Trigonometry, by the way, has an inordinate amount of time spent on it. Just tell kids how right triangles work, sine/cosine/tangent, SOHCAHTOA, a few identities, and you're good. You don't need to know scalene and isosceles triangles. Why do we even have special names for them? Who gives a shit if a triangle has sides of the same length? Either it's a right triangle and it's useful, or it's not a right triangle and you have to do some crazy sine law shit, which usually means your algorithm is just wrong, and the only time you ever actually need it you can just look up the formula, because it's an obtuse edge case that almost never comes up.
Think about that. We're asking kids to solve edge cases that never come up in reality and grading how good they are at math based on that. And then we're confused when they complain about math having no practical application? Well duh. The sheer amount of time spent on useless topics is staggering. Calculus should be taught to high school freshmen. Differential equations and complex analysis go to the seniors, and by the time you get into college you're looking at combinatorics and vector analysis, not basic calculus.
I have already seen some heavily flawed arguments against this. Some people say that people aren't interested in math, so this will never work. Since I'm saying that teaching kids advanced concepts early on will make them interested in math, this is a circular argument and invalid. Other people claim that the kids will never understand because of some bullshit about needing logical constructs, which just doesn't make sense because you should still introduce the concepts. Introducing a concept early on and having the student be confused about it is a good thing because it means they'll try to work it out over time. The more time you give them, the more likely it will click. Besides, most students aren't understanding algebra with the current system anyway, so I fail to see the point of that argument. It's not working now so don't try to change it or you'll make it worse? That's just pathetic.
TL;DR: Stop teaching kids stupid, pointless math they won't need and maybe they won't rightfully conclude that what they are being taught is useless.