Have you built an algorithm that mostly works? Does it account for almost everyone's needs, save for a few weird outliers that you ignore because they make up 0.0001% of the population? Congratulations, your algorithm is racist! To illustrate how this happens, let's take a recent example from Facebook. My friend's message was removed for "violating community standards". Now, my friend has had all sorts of ridiculous problems with Facebook, so to test my theory, I posted the exact same message on my page, and then had him report it.
Golly gee, look at that, Facebook confirmed the message I sent does not violate community guidelines, but he's still banned for 12 hours for posting the exact same thing. What I suspect happened is this: Facebook has gotten mad at my friend for having a weird name multiple times, but he can't prove what his name is because family problems have left him without access to his birth certificate, and he thinks someone's been falsely reporting a bunch of his messages. The algorithm for determining whether or not something is "bad" probably took these misleading inputs, combined them with a short list of so-called "dangerous" topics like "terrorism", and then decided that if anyone reported one of his messages, it was probably bad. On the other hand, I have a very western name and nobody reports anything I post, so either the report actually made it to a human being, or the algorithm simply decided it was probably fine.
Of course, the algorithm was wrong about my friend's message. But Facebook doesn't care. I'm sure a bunch of self-important programmers are just itching to tell me we can't deal with all the edge-cases in a commercial algorithm because it's infeasible to account for all of them. What I want to know is, have any of these engineers ever thought about who the edge-cases are? Have they ever thought about the kind of people who can't produce birth certificates, or don't have a driver's license, or have strange names that don't map to unicode properly because they aren't western enough?
Poor people. Minorities. Immigrants. Disabled people. All these people they claim to care about, all this talk of diversity and equal opportunity and inclusive policies, and they're building algorithms that by their very nature will exclude those less fortunate than them. Facebook's algorithm probably doesn't even know that my friend is Asian, yet it's still discriminating against him. Do you know who can follow all those rules and assumptions they make about normal people? Rich people. White people. Privileged people. These algorithms benefit those who don't need help, and disproportionately punish those who don't need any more problems.
What's truly terrifying is that Silicon Valley wants to run the world, and it wants to automate everything using a bunch of inherently flawed algorithms. Algorithms that might be impossible to perfect, given the almost unlimited number of edge-cases that reality can come up with. In fact, as I am writing this article, Chrome doesn't recognize "outlier" as a word, even though Google itself does.
Of course, despite this, Facebook already built an algorithm that tries to detect "toxicity" and silences "unacceptable" opinions. Even if they could build a perfect algorithm for detecting "bad speech", do these companies really think forcibly restricting free speech will accomplish anything other than improving their own self-image? A deeply cynical part of me thinks the only thing these companies actually care about is looking good. A slightly more optimistic part of me thinks a bunch of well-meaning engineers are simply being stupid.
You can't change someone's mind by punching them in the face. Punching people in the face may shut them up, but it does not change their opinion. It doesn't fix anything. Talking to them does. I'm tired of this industry hiding problems behind shiny exteriors instead of fixing them. That's what used car salesmen do, not engineers. Programming has devolved into an art of deceit, where coders hide behind pretty animations and huge frameworks that sweep all their problems under the rug, while simultaneously screwing over the people who were supposed to benefit from an "egalitarian" industry that seems less and less egalitarian by the day.
Either Silicon Valley needs to start dealing with people who don't fit in neat little boxes, or it will no longer be able to push humanity forward. If we're going to move forward as a species, we have to do it together. Launching a bunch of rich people into space doesn't accomplish anything. Curing cancer for rich people doesn't accomplish anything. Inventing immortality for rich people doesn't accomplish anything. If we're going to push humanity forward, we have to push everyone forward, and that means dealing with all 7 billion outliers.
I hope Silicon Valley doesn't drag us back to the feudal age, but I'm beginning to think it already has.
November 1, 2017
October 10, 2017
My Little Pony And The Downfall Of Western Civilization
Our current political climate is best described as a bunch of chimpanzees angrily throwing feces at each other. Many people recognize that there are many problems with modern politics, but few agree on what those problems actually are. Liberals, Republicans, Trump, millennials, capitalism, communism, too many regulations, too few regulations, gerrymandering, illegal immigrants, rich people, poor people, schools, governments, religion, secularism, the list goes on and on and on. These, however, are not problems, they are excuses. They are scapegoats we have conjured up from our mental list of things we don't like so we can ignore the ugly truth.
At some point, Americans are going to have to admit that the entire problem with this country has been summed up in a TV show for six-year-old girls designed to sell toys. My Little Pony was released seven years ago, and as the series has progressed, it has focused more and more on reforming villains instead of defeating them. Time and time again, ponies refuse to give up on those who seem lost to darkness and try to figure out what is causing them to lash out. A central theme of the show is that almost no one is truly a "villain", only misguided, misunderstood, or pushed towards lashing out after a traumatic experience. While a select few villains have been shown to be truly irredeemable, this is rare, which is in line with reality. Only a very small percentage of the human population is deliberately evil as opposed to simply lashing out, being stupid, or being tragically misinformed.
It is no longer possible to argue with anyone anymore. This is not because everyone suddenly became incapable of critical thought, but because no one can agree on any facts. When we built the internet, we thought having access to the sum of all human knowledge would bring infinite prosperity, but instead it brought us infinite misinformation. Russians have been making this worse, feeding false narratives to both sides to make us hate each other. It worked. When scientists themselves have been manipulated by corporations without anyone bothering to attempt to reproduce experiments, there simply isn't any good way to verify the truth of anything. The entire point of the scientific method is to make sure something is reproducible, but a staggering number of retractions in recent years have demonstrated that barely anyone actually checks anyone else's work. This has fostered a general distrust in science, even for results that have been reproduced thousands of times, like the many studies thoroughly debunking the "Vaccines cause autism!" claim.
This kind of political polarization is untenable. We no longer live in a world where we can get into a fight, stab someone else with a sword and call it good. If our political polarization goes unchecked, it will result in nothing less than the total collapse of western civilization. Humanity will have proven itself too dumb and tribalist to wield a tool as powerful as the internet. Whatever civilization comes after us will struggle to reclaim the technological progress we enjoy, now that we've stripped the planet of resources. If we fail now, humanity will never again be capable of reaching for the stars. We will have sentenced ourselves to live on this small rock until the sun boils the oceans away, doomed by our own stupidity.
There was an episode of My Little Pony that explained the origin of their country, Equestria. The three races, Pegasi, Earth ponies, and Unicorns, hated each other. Evil forces fed on their hatred, smothering the land in snow and threatening to freeze them all to death. So, each race set out to find a new land to colonize, only to suddenly realize that each of them had wound up on the exact same new continent. The leaders of each race immediately started yelling at each other, and the blizzard returned, until each of them was encased in ice. Only when the assistants of each leader realized they didn't actually hate each other were they able to dispel the evil forces by starving them of hatred and talking sense into their leaders. The moral of this story is very clear: either we figure out how to get along, or we're all going to die. I really, honestly don't know how to explain this better than a Saturday morning cartoon show about magical ponies.
The problem is that I don't know if humanity is capable of moving past this. I've seen one of my friends rapidly devolve into insane conspiracy theory nonsense, and I simply don't have the mental willpower to engage with them. I eventually had to block them, and accomplished two things at once: reinforcing their echo chamber and reinforcing my echo chamber. I tried to hold on to them as a window into conservative nonsense so my twitter wasn't a complete echo chamber, but when the other side is saying truly horrible things about you, this becomes more and more difficult. It also makes it more likely for me to say truly horrible things about the other side, in a vicious, endless cycle, and I simply have too many other things to worry about to be capable of dealing with that level of toxicity. At this stage, I'm not sure humanity has the strength to actually reconcile with itself. I think there is a real possibility that our worries about AI were misplaced - the technology that might ultimately destroy us could be the internet, simply because our tribalist brains are too desperate to find someone to hate. We may be fundamentally incapable of sustaining a global, interconnected society with instantaneous communication.
At least then we'll have a very definitive answer to the Fermi Paradox.
September 5, 2017
I Used To Want To Work For Google
A long time ago I thought Google was this magical company that truly cared about engineering and solving problems instead of maximizing shareholder value. Then Larry Page became CEO and I realized they were not a magical unicorn and lamented the fact that they had been transformed into "just another large company". Several important things happened between that post and now: Microsoft got a new CEO, so I decided to give them a shot and got hired there. I quit right before Windows 10 came out because I knew it was going to be a disaster. More recently, it's become apparent that Google has gone far past simply being a behemoth unconcerned with the cries of the helpless and transformed into something outright malevolent. It's silenced multiple reporters, blocked Windows Phone from accessing YouTube out of spite, and successfully gotten an entire group of researchers fired by threatening to pull funding (but that didn't stop them).
This is evil. This is horrifying. This is the kind of stuff Microsoft did in the 90s that made everyone hate it so much they still have to fight against the repercussions of decisions made two decades ago because of the sheer amount of damage they did and lives they ruined. I'm at the point where I'd rather go back to Microsoft, whose primary sin these days is mostly just being incompetent instead of outright evil, than work for Google, which is actually doing things that are fundamentally morally wrong. These are the kinds of decisions that are bald-faced abuses of power, without any possible "good intention" driving them. It's vile. There is no excuse.
As an ex-Microsoft employee, I can assure you that at no point did I think Microsoft was doing something evil while I was there. I haven't seen Microsoft do anything outright evil since I left, either. The few times they came close they backed off and apologized later. Microsoft didn't piss people off by being evil, it pissed people off by being dumb. I was approached by a Google recruiter shortly after I left and I briefly considered going to Google because I considered them vastly more competent, and I still do. However, no amount of engineering competency can make me want to work for a company that actively does things I consider morally reprehensible. This is the same reason I will never work for Facebook. I've drawn a line in the sand, and I find myself in the surprising situation of being on the opposite side of Google, and discovering that Microsoft, of all companies, isn't with them.
I always thought I'd be able to mostly disregard the questionable things that Google and Microsoft were doing and compare them purely on the competency of their engineers. However, it seems that Google has every intention of driving me away by doing things so utterly disgusting I could never work there and still be able to sleep at night. This worries me deeply, because as these companies get larger and larger, they eat up all the other sources of employment. Working at a startup that isn't one of the big 5 won't help if it gets bought out next month. One friend of mine with whom I shared many horror stories worked at LinkedIn. He was not happy when he woke up one day to discover he now worked for the very company he had heard me complaining about. Even now, he's thinking of quitting, and not because Microsoft is evil - they're just so goddamn dumb.
The problem is that there aren't many other options, short of starting your own company. Google is evil, Facebook is evil, Apple is evil if you care about open hardware, Microsoft is too stupid to be evil but might at some point become evil again, and Amazon is probably evil and may or may not treat its employees like shit depending on who you ask. Even if you don't work directly for them, you're probably using their products or services. At some point, you have to put food on the table. This is why I generally refuse to blame someone for working for an evil company, because the economy punishes you for trying to stand up for your morals. It's not the workers' fault here; it's Wall Street incentivizing rotten behavior by rewarding short-term profits instead of long-term growth. A free market optimizes to a monopoly. Monopolies are bad. I don't know what people don't get about this. We're fighting over stupid shit like transgender troops or gay rights instead of just treating other human beings with decency, all the while letting rich people rob us blind as they decimate the economy. This is stupid. I would daresay it's almost more stupid than the guy at Microsoft who decided to fire all the testers.
But I guess I'll take unrelenting stupidity over retaliating against researchers for criticizing you. At least until Microsoft remembers how to be evil. Then I don't know what I'll do.
I don't know what anyone will do.
August 6, 2017
Sexist Programmers Are Awful Engineers
Men and women are fundamentally different. So are white people and black people and autistic people and gay people and transgender people and conservatives and liberals and every other human being along every imaginable axis of discrimination. Some of these differences are cultural. Others are genetic. Others depend on environmental factors. These differences mean that some of us are inherently better at certain tasks than others. On average, men are better at spatial-temporal reasoning, women are better at reading comprehension and writing ability, and psychopaths can sometimes be excellent CEOs.
Whenever I meet a programmer who insists on doing everything a certain way, the chances I'll hire them drop off a cliff. Just as object-oriented programming didn't fix everything, neither will functional programming, or data-oriented programming or array-based programming or any other language. They are different tools that allow you to attack a problem from different directions, much like we have different classes of algorithms to attack certain classes of problems. Greedy algorithms, lazy evaluation, dynamic programming, recursive-descent, maximum flow, all of these are different ways to approach a problem. They represent looking at a problem from different perspectives. A problem that is difficult from one angle might be trivial when examined from a different angle.
When I stumbled upon the anti-diversity memo written by a Google employee, I wondered just how dysfunctional an engineer that person must be. Problems are never solved by being closed-minded. They are solved by opening ourselves to new possibilities and exploring the problem space as an infinitely-dimensional fabric of possible configurations. You do not find possible edge-cases by being closed-minded. You find them by probing the outer edges of your solution, trying to find singularities and inflection points that hint at unusual behavior.
You cannot build a great company by hiring people who are good at the same things you are. Attempting to maximize diversity only comes at a perceived cost of aptitude if you are measuring the wrong things. If your concept of what makes a "good programmer" is an extremely narrow set of skills, then you will inevitably select towards a specific ethnicity, culture, or sex, because the tiny statistical differences will be grossly magnified by the extremely narrow job requirements. Demand that all your programmers invert a binary tree on a whiteboard and you'll filter out the guy who wrote the software 90% of your company uses.
If you think the field of computer science is really this narrow, you're a terrible programmer. Turing completeness is a fundamental property of the universe, and we are only just beginning to explore the full implications of information theory, the foundations of type theory, NP-completeness, and the nature of computation itself. Disregarding other people because they can't do something without ever considering what they can do will only hurt your team, and your company. Diversity inclusion programs shouldn't try to hire more women and ethnic groups because they're the same, they should be trying to hire them because they are different.
When hiring someone to complete a job, you should hire whoever is the best fit for the job. In a vacuum where there is a single task that needs to be completed, gender and ethnicity should be ignored in favor of a purely meritocratic assessment. However, if you have a company that must respond to a changing world, diversity can reveal solutions you never even knew existed. An established company like Google must actively seek to increase diversity so that it can explore new perspectives that may give it an edge over its rivals. They cannot select on a purely meritocratic basis, because all measures of merit would be based on what the company is already good at, not what it could be good at. You cannot explore new opportunities by hiring the same people.
Intelligent people value feedback from people who think differently than them. This is why many executives will deliberately hire people they disagree with so they can have someone challenge their views. This helps avoid creating an echo-chamber, which is the ultimate irony of a memo that's called "Google’s Ideological Echo Chamber", because scrapping the diversity inclusion programs as the memo suggests would itself create a new echo-chamber. You can't remove an echo-chamber by removing diversity - the author's premise is self-defeating. If they had stuck with only claiming that conservative ideologies should not be discriminated against, they would have been correct. Unfortunately, telling everyone they shouldn't discriminate against your perspective, which itself involves discriminating against other perspectives, is by definition a contradiction.
We aren't going to write better programs by doing the same thing we've been doing for the past 10 years. To improve is to change, and those who seek to become better software engineers must themselves embrace change, or they will be left behind to rot in the sewers of forgotten programs, maintaining rancid enterprise code for the rest of their lives. If we are unwilling to change who is writing the programs, we'll be stuck making the same thing over and over again. A business that thinks nothing needs to change is one ripe for disruption. If you really think only hiring white males who correctly answer all your questions about graph theory and B-trees will help your business in the long-term, you're an idiot.
July 30, 2017
Why I Never Built My SoundCloud Killer
While the news of SoundCloud imploding is fairly recent, musicians and producers have had a beef with the music uploading site's direction for years. I still remember when SoundCloud only gave you a paltry 2 hours' worth of free upload time and managed to convert my high quality lossless WAV files to the shittiest 128 kbps MP3 I've ever heard in my life. What really pissed me off was that they demanded a ridiculous $7 a month just to double your upload time. This is in contrast to Newgrounds, a tiny website run by a dozen people with an audio portal built almost as an afterthought that still manages to be superior to every single other offering. It gives you unlimited space, for free, and lets you upload your own MP3, which allows me to upload properly encoded joint-stereo 128 kbps MP3 files, or much higher quality MP3s for songs I'm giving out for free.
Obviously, Newgrounds is only able to offer unlimited free uploads because the audio portal just piggybacks on the rest of the site. However, I was so pissed off at SoundCloud's disgusting subscription offering that I actually ran the numbers in terms of what it would cost to store lossless FLAC encodings of songs using Amazon S3. These calculations are now out of date, so I've redone them for the purposes of this blog.
The average size of a FLAC-encoded song is around 60 MB, but we'll assume 80 MB as an upper bound, and to include the cost of storing the joint-stereo 128 kbps streaming MP3, which is usually less than 10% of the size (using Opus would reduce this even more, but it is not supported on all browsers yet). Amazon offers the first 50 TB of storage at $0.023 per gigabyte, per month. This comes down to about $0.00184 per month, per song, in order to store the full uncompressed version. Now, obviously, we must also stream the song, but we're only streaming the low-quality version, which is 10% the size, or about 7 MB in our example (7 MB + 70 MB is about 80 MB for storage). The vast majority of music producers on the website have almost no following, and most will be lucky to get a single viral hit. As an example, after being on SoundCloud for over 7 years, I have managed to amass a mere 100,000 plays total. If I somehow got 20,000 plays of my songs every single month, the total cost of streaming 140 GB from Amazon S3 at $0.05 per GB would be $7 per month. That's how much SoundCloud is charging just to double my storage space!
This makes even less sense when you calculate that 6 hours of FLAC would be 4.7 GB, or about 5 GB including the 128 kbps streaming MP3s. 5 GB of storage space costs a pathetic $0.12 a month to store on Amazon S3! All of the costs come down to bandwidth, which is relatively fixed by how many people are listening to songs, not how many songs there are (the quick sketch below reruns these numbers). This means, if I'm paying any music service for the right to store music on their servers, I should get near unlimited storage space (maybe put in a sanity check of 10 GB max per month to limit abuse). I will point out that Clyp.it actually does this properly, giving you 6 hours of storage space for free and unlimited storage space if you pay them $6 a month.
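Since it's easy to slip a decimal place in these calculations, here's a tiny sketch that reruns the math. The S3 prices are the 2017 rates quoted above; the file sizes and play counts are my own assumptions:
```cpp
#include <cstdio>

int main()
{
    const double storagePerGBMonth = 0.023; // S3 first-50TB tier, $/GB/month (2017)
    const double transferPerGB = 0.05;      // S3 outbound transfer, $/GB (2017)

    double songGB = 0.080; // 80 MB upper bound: FLAC plus streaming MP3
    double mp3GB = 0.007;  // ~7 MB streaming copy actually served
    double plays = 20000;  // hypothetical plays per month

    std::printf("storing one song:        $%.5f/month\n", songGB * storagePerGBMonth);    // ~$0.00184
    std::printf("streaming 20,000 plays:  $%.2f/month\n", plays * mp3GB * transferPerGB); // ~$7.00
    std::printf("storing 6 hours (~5 GB): $%.2f/month\n", 5.0 * storagePerGBMonth);       // ~$0.12
    return 0;
}
```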
Unfortunately, Clyp.it does not try to be SoundCloud as it has no comments, no reshares, and well, not much of anything, really. It's like a giant pastebin for sounds where you can follow people or favorite things. It's also probably screwed.
Even though I had a name and a website design, I never launched it because even if I could find a way to identify a copyrighted song via some sort of ContentID system, I couldn't make it work without the record industry's cooperation. The problem is that the system has to know what songs are illegal in order to block them in the first place. Otherwise, people could upload Justin Bieber songs with impunity and I'd still get sued out of existence. The hard part about making a site like SoundCloud isn't actually making the website, it's dealing with the insane, litigation-happy oligarchs that own the entire music industry.
SoundCloud's ordeal is mentioned in this article. Surprisingly, it took until 2012 for them to realize they had to start making deals with the major music labels. It took until 2014 for many of those deals to actually happen, and they were not in SoundCloud's favor. A deal with Warner Music Group, closed in 2014, gave Warner a 3-5% stake in the company and an undisclosed cut of ad-revenue, just so SoundCloud could have the privilege of not being sued out of existence. This wasn't even an investment round, it was just so SoundCloud could have Warner Music Group's catalog on the site and not get sued!
At this point, you have to be either very naive or very rich to go up against an industry that can and will send an army of lawyers after you. The legal system is not in your favor. It will be used to crush you like a bug and there is nothing you can do about it, because of one fundamental problem: You can't detect copyright infringement without access to the original copy.
Because of this, the music industry holds the entire world hostage with a kind of Catch-22: They demand you take down all copyright infringing material, but in order to figure out if something is copyright infringement, you need access to their songs, which they only give out on their terms, which are never in your favor.
July 24, 2017
Integrating LuaJIT and Autogenerating C Bindings In Visual Studio
Lua is a popular scripting language due to its tight integration with C. LuaJIT is an extremely fast JIT compiler for Lua that can be integrated into your game, which also provides an FFI library that directly interfaces with C functions, eliminating most overhead. However, the FFI library only accepts a subset of the C standard. Specifically, "C declarations are not passed through a C pre-processor, yet. No pre-processor tokens are allowed, except for #pragma pack." The website suggests running the header file through a preprocessor stage, but I have yet to find a LuaJIT tutorial that actually explains how to do this. Instead, all the examples simply copy+paste the function prototypes into the Lua file itself. Doing this with makefiles and GCC is trivial, because you just have to add a compile step using the -E option, but integrating this with Visual Studio is more difficult. In addition, I'll show you how to properly load scripts and modify the PATH lookup variable so your game can have a proper scripts folder instead of dumping everything in bin.
Compilation
To begin, we need to download LuaJIT and get it to actually compile. Doing this manually isn't too difficult: simply open an x64 Native Tools Command Prompt (or x86 Native Tools if you want 32-bit), navigate to src and run msvcbuild.bat. The default options will build an x64 or x86 dll with dynamic linking to the CRT. If you want a static lib file, you need to run it with the static option. If you want static linking to the CRT so you don't have to deal with that annoying Visual Studio Runtime Library crap, you'll have to modify the .bat file directly. Specifically, you need to find %LJCOMPILE% /MD and change it to %LJCOMPILE% /MT. This will then compile the static lib or dll with static CRT linking to match your other projects.
This is a bit of a pain, and recently I've been trying to automate my build process and dependencies using vcpkg to act as a C++ package manager. A port of LuaJIT is included in the latest update of vcpkg, but if you want one that always statically links to the CRT, you can get it here.
An important note: the build instructions for LuaJIT state that you should copy the lua scripts contained in src/jit to your application folder. What they don't mention is that this is optional - those scripts contain debugging instructions for the JIT engine, which you probably don't need. It will work just fine without them.
Once you have LuaJIT built, you should add its library file to your project. This library file is called lua51.lib (and the dll is lua51.dll), because LuaJIT is designed as a drop-in replacement for the default Lua runtime. Now we need to actually load Lua in our program and integrate it with our code. To do this, use lua_open(), which returns a lua_State* pointer. You will need that lua_State* pointer for everything else you do, so store it somewhere easy to get to. If you are building a game using an Entity Component System, it makes sense to build a LuaSystem that stores your lua_State* pointer.
Initialization
The next step is to load in all the standard Lua libraries using luaL_openlibs(L). Normally, you shouldn't do this if you need script sandboxing for player-created scripts. However, LuaJIT's FFI library is inherently unsafe: any script with access to the FFI library can call any kernel API it wants, so you should be extremely careful about using LuaJIT if this is a use-case for your game. We can also register any C functions we want the old-fashioned way via lua_register, but this is only useful for functions that don't have C analogues (due to having multiple return values, etc.).
There is one function in particular that you probably want to overload, and that is the print() function. By default, Lua will simply print to standard out, but if you aren't redirecting standard out to your in-game console, you probably have your own std::ostream (or even a custom stream class) that is sent all log messages. By overloading print(), we can have our Lua scripts automatically write to both our log file and our in-game console, which is extremely useful. Here is a complete re-implementation of print that outputs to an arbitrary std::ostream& object:
```cpp
int PrintOut(lua_State *L, std::ostream& out)
{
int n = lua_gettop(L); /* number of arguments */
if(!n)
return 0;
int i;
lua_getglobal(L, "tostring");
for(i = 1; i <= n; i++)
{
const char *s;
lua_pushvalue(L, -1); /* function to be called */
lua_pushvalue(L, i); /* value to print */
lua_call(L, 1, 1);
s = lua_tostring(L, -1); /* get result */
if(s == NULL)
return luaL_error(L, LUA_QL("tostring") " must return a string to "
LUA_QL("print"));
if(i > 1) out << "\t";
out << s;
lua_pop(L, 1); /* pop result */
}
out << std::endl;
return 0;
}
```
To overwrite the existing print function, we need to first define a Lua-compatible shim function. In this example, I pass std::cout as the target stream:
```cpp
int lua_Print(lua_State *L)
{
return PrintOut(L, std::cout);
}
```
Now we simply register our lua_Print function using lua_register(L, "print", &lua_Print). If we were doing this in a LuaSystem object, our constructor would look like this:
```cpp
LuaSystem::LuaSystem()
{
L = lua_open();
luaL_openlibs(L);
lua_register(L, "print", &lua_Print);
}
```
To clean up our Lua instance, we trigger a final GC iteration to clean up any dangling memory and then call lua_close(L), so our destructor would look like this:
```cpp
LuaSystem::~LuaSystem()
{
lua_gc(L, LUA_GCCOLLECT, 0);
lua_close(L);
L = 0;
}
```
Loading Scripts via Require
At this point most tutorials skip to the part where you load a Lua script and write "Hello World", but we aren't done yet. Integrating Lua into your game means loading scripts and/or arbitrary strings as Lua code while properly resolving dependencies. If you don't do this, any one of your scripts that relies on another script will have to do require("full/path/to/script.lua"). We also face another problem: if we want a scripts folder where we automatically load every single script into our workspace, simply loading them all can cause duplicated code, because luaL_loadfile does not have any knowledge of require. You can solve this by loading a single bootstrap.lua script which then loads all your game's scripts via require, but we're going to build a much more robust solution.
First, we need to modify Lua's PATH variable, or the variable that controls where it looks up scripts relative to our current directory. This function will append a path (which should be of the form "path/to/scripts/?.lua") to the beginning of the PATH variable, giving it highest priority. You can then use it to add as many script directories as you want in your game, and any lua script from any of those folders will be able to require() a script from any other folder in PATH without a problem. Obviously, you should probably only add one or two folders, because you don't want to deal with potential name conflicts in your script files.
```cpp
int AppendPath(lua_State *L, const char* path)
{
lua_getglobal(L, "package");
lua_getfield(L, -1, "path"); // get field "path" from table at top of stack (-1)
std::string npath = path;
npath.append(";");
npath.append(lua_tostring(L, -1)); // grab path string from top of stack
lua_pop(L, 1);
lua_pushstring(L, npath.c_str());
lua_setfield(L, -2, "path"); // set the field "path" in table at -2 with value at top of stack
lua_pop(L, 1); // get rid of package table from top of stack
return 0;
}
```
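Usage is a single call during initialization. The folder name here is hypothetical - point it at wherever your game actually keeps its scripts:
```cpp
// Hypothetical scripts folder; this gives it highest priority in require() lookups.
AppendPath(L, "scripts/?.lua");
```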
Next, we need a way to load all of our scripts using require() so that Lua properly resolves the dependencies. To do this, we create a function in C that literally calls the require() function for us:
```cpp
int Require(lua_State *L, const char *name)
{
lua_getglobal(L, "require");
lua_pushstring(L, name);
int r = lua_pcall(L, 1, 1, 0);
if(!r)
lua_pop(L, 1);
WriteError(L, r, std::cout);
return r;
}
```
By using this to load all our scripts, we don't have to worry about loading them in any particular order - require will ensure everything gets loaded correctly. An important note here is WriteError(), which is a generic error handling function that processes Lua errors and writes them to a log. All errors in Lua will return a nonzero error code, and will usually push a string containing the error message onto the stack, which must then be popped off, or it'll mess things up later.
```cpp
void WriteError(lua_State *L, int r, std::ostream& out)
{
if(!r)
return;
if(!lua_isnil(L, -1)) // Check if a string was pushed
{
const char* m = lua_tostring(L, -1);
out << "Error " << r << ": " << m << std::endl;
lua_pop(L, 1);
}
else
out << "Error " << r << std::endl;
}
```
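Putting the pieces together, a plausible startup sequence looks something like this (the script names are hypothetical, and enumerating the folder is left to whatever directory-listing facility your engine already has):
```cpp
// Register our script folder, then load everything through require() so that
// Lua resolves the dependency graph and never runs the same script twice.
AppendPath(L, "scripts/?.lua");
Require(L, "player");
Require(L, "enemies");
Require(L, "ui");
```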
Automatic C Binding Generation
Fantastic, now we're all set to load up our scripts, but we still need to somehow define a header file and also load that header file into LuaJIT's FFI library so our scripts have direct access to our program's exposed C functions. One way to do this is to just copy+paste your C function definitions into a Lua file in your scripts folder that is then automatically loaded. This, however, is a pain in the butt and is error-prone. We want to have a single source of truth for our function definitions, which means defining our entire LuaJIT C API in a single header file, which is then loaded directly into LuaJIT. Predictably, we will accomplish this by abusing the C preprocessor:
```cpp
#ifndef __LUA_API_H__
#define __LUA_API_H__
#ifndef LUA_EXPORTS
#define LUAFUNC(ret, name, ...) ffi.cdef[[ ret lua_##name(__VA_ARGS__); ]]; name = ffi.C.lua_##name
local ffi = require("ffi")
ffi.cdef[[ // Initial struct definitions
#else
#define LUAFUNC(ret, name, ...) ret __declspec(dllexport) lua_##name(__VA_ARGS__)
extern "C" { // Ensure C linkage is being used
#endif
struct GameInfo
{
uint64_t DashTail;
uint64_t MaxDash;
};
typedef const char* CSTRING; // VC++ complains about having const char* in macros, so we typedef it here
#ifndef LUA_EXPORTS
]] // End struct definitions
#endif
LUAFUNC(CSTRING, GetGameName);
LUAFUNC(CSTRING, IntToString, int);
LUAFUNC(void, setdeadzone, float);
#ifdef LUA_EXPORTS // close the extern "C" block opened above
}
#endif
#endif
```
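For the C++ half, here is a sketch of how the same header might be consumed when building the game itself. The LUA_EXPORTS define and the function body are my assumptions, not something taken from the original project:
```cpp
#define LUA_EXPORTS // normally set as a project-wide preprocessor definition
#include "LuaAPI.h"

// LUAFUNC expands to: CSTRING __declspec(dllexport) lua_GetGameName()
// The extern "C" declaration in the header gives this definition C linkage.
LUAFUNC(CSTRING, GetGameName)
{
  return "MyGame"; // hypothetical return value
}
```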
The key idea here is to use macros such that, when we pass this through the preprocessor without any predefined constants, it will magically turn into a valid Lua script. However, when we compile it in our C++ project, our project defines LUA_EXPORTS, and the result is a valid C header. Our C LUAFUNC is set up so that we're using C linkage for our structs and functions, and so that we're exporting the function via __declspec(dllexport). This obviously only works for Visual Studio, so you'll want to set up a macro for the GCC version, but I will warn you that VC++ got really cranky when I tried to use a macro for that in my code, so you may end up having to redefine the entire LUAFUNC macro for each compiler.
At this point, we have a bit of a choice to make. It's more convenient to have the C functions available in the global namespace, which is what this script does, because this simplifies calling them from an interactive console. However, using ffi.C.FunctionName is significantly faster. Technically the fastest way is declaring local C = ffi.C at the top of a file and then calling the functions via C.FunctionName. Luckily, importing the functions into the global namespace does not preclude us from using the "fast" way of calling them, so our script here imports them into the global namespace for ease of use, but in our scripts we can use the C.FunctionName method instead. Thus, when outputting our Lua script, our LUAFUNC macro wraps our function definition in a LuaJIT ffi.cdef block, and then runs a second Lua statement that brings the function into the global namespace. This is why we have an initial ffi.cdef code block for the structs up top, so we can include that second Lua statement after each function definition.
Now we need to set up our compilation so that Visual Studio generates this file without any predefined constants and outputs the resulting lua script to our scripts folder, where our other in-game scripts can automatically load it from. We can accomplish this using a Post-Build Event (under Configuration Properties -> Build Events -> Post-Build Event), which then runs the following code:
```
CL LuaAPI.h /P /EP /u
COPY "LuaAPI.i" "../bin/your/script/folder/LuaAPI.lua" /Y
```
Visual Studio can sometimes be finicky about that newline, but if you put in two statements on two separate lines, it should run both commands sequentially. You may have to edit the project file directly to convince it to actually do this. The key line here is CL LuaAPI.h /P /EP /u, which tells the compiler to preprocess the file and output it to a *.i file. There is no option to configure the output file; it will always be the exact same file but with a .i extension, so we have to copy and rename it ourselves to our scripts folder using the COPY command.
Loading and Calling Lua Code
We are now set to load all our Lua scripts in our script folder via Require, but what if we want an interactive Lua console? There are Lua functions that read strings, but to make this simpler, I will provide a function that loads a Lua script from an arbitrary std::istream and outputs to an arbitrary std::ostream:

const char* _luaStreamReader(lua_State *L, void *data, size_t *size)
{
static char buf[CHUNKSIZE];
reinterpret_cast<std::istream*>(data)->read(buf, CHUNKSIZE);
*size = reinterpret_cast<std::istream*>(data)->gcount();
return buf;
}
int Load(lua_State *L, std::istream& s, std::ostream& out)
{
int r = lua_load(L, &_luaStreamReader, &s, 0);
if(!r)
{
r = lua_pcall(L, 0, LUA_MULTRET, 0);
if(!r)
PrintOut(L, out);
}
WriteError(L, r, out);
return r;
}
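To show why the stream approach is convenient, here's a bare-bones interactive console built on top of Load() - just a sketch that treats each typed line as its own chunk:

#include <iostream>
#include <sstream>
#include <string>

void Console(lua_State *L)
{
std::string line;
while(std::getline(std::cin, line))
{
std::istringstream ss(line);
Load(L, ss, std::cout); // errors are reported through WriteError()
}
}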
Of course, the other question is how to call Lua functions from our C++ code directly. There are many, many different implementations of this available, of varying amounts of safety and completeness, but to get you started, here is a very simple implementation in C++ using templates. Note that this does not handle errors - you can change it to use lua_pcall and check the return code, but handling arbitrary Lua errors is nontrivial.

template<class T, int N>
struct LuaStack;
template<class T> // Integers
struct LuaStack<T, 1>
{
static inline void Push(lua_State *L, T i) { lua_pushinteger(L, static_cast<lua_Integer>(i)); }
static inline T Pop(lua_State *L) { T r = (T)lua_tointeger(L, -1); lua_pop(L, 1); return r; }
};
template<class T> // Pointers
struct LuaStack<T, 2>
{
static inline void Push(lua_State *L, T p) { lua_pushlightuserdata(L, (void*)p); }
static inline T Pop(lua_State *L) { T r = (T)lua_touserdata(L, -1); lua_pop(L, 1); return r; }
};
template<class T> // Floats
struct LuaStack<T, 3>
{
static inline void Push(lua_State *L, T n) { lua_pushnumber(L, static_cast<lua_Number>(n)); }
static inline T Pop(lua_State *L) { T r = static_cast<T>(lua_tonumber(L, -1)); lua_pop(L, 1); return r; }
};
template<> // Strings
struct LuaStack<std::string, 0>
{
static inline void Push(lua_State *L, std::string s) { lua_pushlstring(L, s.c_str(), s.size()); }
static inline std::string Pop(lua_State *L) { size_t sz; const char* s = lua_tolstring(L, -1, &sz); std::string r(s, sz); lua_pop(L, 1); return r; }
};
template<> // Boolean
struct LuaStack<bool, 1>
{
static inline void Push(lua_State *L, bool b) { lua_pushboolean(L, b); }
static inline bool Pop(lua_State *L) { bool r = lua_toboolean(L, -1); lua_pop(L, 1); return r; }
};
template<> // Void return type
struct LuaStack<void, 0> { static inline void Pop(lua_State *L) { } };
template<typename T>
struct LS : std::integral_constant<int,
std::is_integral<T>::value +
(std::is_pointer<T>::value * 2) +
(std::is_floating_point<T>::value * 3)>
{};
// Base case: all arguments have been pushed, so perform the call and pop the result.
template<typename R, int N>
inline R _callLua(lua_State *L, const char* function)
{
lua_call(L, N, std::is_void<R>::value ? 0 : 1);
return LuaStack<R, LS<R>::value>::Pop(L);
}
// Recursive case: push the next argument, then recurse on the rest.
template<typename R, int N, typename Arg, typename... Args>
inline R _callLua(lua_State *L, const char* function, Arg arg, Args... args)
{
LuaStack<Arg, LS<Arg>::value>::Push(L, arg);
return _callLua<R, N, Args...>(L, function, args...);
}
template<typename R, typename... Args>
inline R CallLua(lua_State *L, const char* function, Args... args)
{
lua_getglobal(L, function);
return _callLua<R, sizeof...(Args), Args...>(L, function, args...);
}
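Usage then looks something like this - AddScore and OnInit are invented names for global Lua functions, purely for illustration:

int total = CallLua<int>(L, "AddScore", 10); // pushes 10, calls AddScore, pops the integer result
CallLua<void>(L, "OnInit"); // calls OnInit() and expects no return value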
Now you have everything you need for an extensible Lua scripting implementation for your game engine, and even an interactive Lua console, all using LuaJIT. Good Luck!
June 24, 2017
Discord: Rise Of The Bot Wars
The most surreal experience I ever had on Discord was when someone PMed me to complain that my anti-spam bot wasn't working against a 200+ bot raid. I pointed out that it was never designed for large-scale attacks, and that Discord's own rate-limiting would likely make it useless. He revealed he was selling spambot accounts at a rate of about $1 for 100 unique accounts and that he was being attacked by a rival spammer. My anti-spam bot had been dragged into a turf war between two spambot networks. We discussed possible mitigation strategies for worst-case scenarios, but agreed that most of them would involve false-positives and that Discord showed no interest in fixing how exploitable their API was. I hoped that I would never have to implement such extreme measures into my bot.

Yesterday, our server was attacked by over 40 spambots, and after Discord's astonishingly useless "customer service" response, I was forced to do exactly that.
A Brief History of Discord Bots
Discord is built on a REST API, which was reverse engineered by late 2015 and used to make unofficial bots. To test out their bots, the bot makers would hunt for servers to "raid", invite their bots to the server, then spam so many messages it would softlock the client, because Discord still didn't have any rate limiting. Naturally, as the designated punching bags of the internet, furries/bronies/Twilight fans/slash fiction writers/etc. were among the first targets. The attack on our server was so severe it took us almost 5 minutes of wrestling with an unresponsive client to ban them. Ironically, a few of the more popular bots today, such as "BooBot", are banned as a result of that attack, because the first thing the bot creator did was use it to raid our server.

I immediately went to work building an anti-spam bot that muted anyone sending more than 4 messages per second. Building a program in a hostile environment like this is much different from writing a desktop app or a game, because the bot had to be bulletproof - it had to rate-limit itself and could not be allowed to crash, ever. Any bug that allowed a user to crash the bot was treated as P0, because it could be used by an attacker to cripple the server. Despite using a very simplistic spam detection algorithm, this turned out to be highly effective. Of course, back then, Discord didn't have rate limiting, or verification, or role hierarchies, or searching chat logs, or even a way to look up where your last ping was, so most spammers were probably not accustomed to having to deal with any kind of anti-spam system.
I added raid detection, autosilence, an isolation channel, and join alerts, but eventually we were targeted by a group from 4chan's /pol/ board. Because this was a sustained attack, they began crafting spam attacks timed just below the anti-spam threshold. This forced me to implement a much more sophisticated anti-spam system, using a heat algorithm with a linear decay rate, which is still in use today. This improved anti-spam system eventually made the /pol/ group give up entirely. I'm honestly amazed the simplistic "X messages in Y seconds" approach worked as long as it did.
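The heart of a heat-based limiter is tiny. Here's a minimal sketch of the general idea - not my bot's actual code, just the shape of it:

struct HeatTracker
{
double heat = 0.0;
double lastUpdate = 0.0; // timestamp of the last message, in seconds

// Returns true when the user crosses the threshold and should be silenced.
bool AddMessage(double now, double cost, double decayRate, double threshold)
{
heat -= (now - lastUpdate) * decayRate; // linear decay since the last message
if(heat < 0.0) heat = 0.0;
heat += cost; // each message adds heat
lastUpdate = now;
return heat >= threshold;
}
};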
Of course, none of this can defend against a large scale attack. As I learned by my chance encounter with an actual spammer, it was getting easier and easier to amass an army of spambots to assault a channel instead of just using one or two.
Anatomy Of A Modern Spambot Attack
At peak times (usually during summer break), our server gets raided 1-2 times per day. These minor raids are often just 2-3 tweens who either attempt to troll the chat, or use a basic user script to spam an offensive message. Roughly 60-70% of these raids are either painfully obvious or immediately trigger the anti-spam bot. About 20% of the raids involve slightly intelligent attempts to troll the chat by being annoying without breaking the rules, which usually take about 5-10 minutes to be "exposed". About 5-10% of the raids are large, involving 8 or more people, but they are also very obvious and can be easily confined to an isolation channel. Problems arise, however, with large spambot raids. Below is a timeline of the recent spambot attack on our server:

[Graph: messages per second during the raid, from 19:41:25 to 19:42:45]
This was a botched raid, but the bots that actually worked started spamming within 5 seconds of joining, giving the moderators a very narrow window to respond. The real problem, however, is that so many of them joined, the bot's API calls to add a role to silence them were rate-limited. They also sent messages once every 0.9 seconds, which is designed to get around Discord's rate limiting. This amounted to 33 messages sent every second, but it was difficult for the anti-spam to detect. Had the spambots reduced their spam cadence to 3 seconds or more, this attack could have bypassed the anti-spam detection entirely. My bot now instigates a lockdown by raising the verification level when a raid is detected, but it simply can't silence users fast enough to deal with hundreds of spambots, so at some point the moderators must use a mass ban function. Of course, banning is restricted by the global rate limit, because Discord has no mass ban API endpoint, but luckily the global rate limit is something like 50 requests per second, so if you're only banning people, you're probably okay.
However, a hostile attacker could sneak bots in one-by-one every 10 minutes or so, avoiding setting off the raid alarm, and then activate them all at once. 500 bots sending randomized messages chosen from an English dictionary once every 5 seconds after sneaking them in over a 48 hour period is the ultimate attack, and one that is almost impossible to defend against, because this also bypasses the 10-minute verification level. As a weapon of last resort, I added a command that immediately bans all users that sent their first message within the past two minutes, but, again, banning is subject to the global rate limit! In fact, the rate limits can change at any time, and while message deletion has a higher rate limit for bots, bans don't.
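Conceptually, that last-resort command is just this (a sketch with invented names - UserRecord and Ban() stand in for whatever your bot framework provides):

#include <cstdint>
#include <vector>

void Ban(uint64_t id); // placeholder for your framework's ban call

struct UserRecord
{
uint64_t id;
double firstMessageTime; // in seconds
};

// Ban everyone whose first message was within the past two minutes.
void BanRecentFirstPosters(const std::vector<UserRecord>& users, double now)
{
for(const UserRecord& u : users)
{
if(now - u.firstMessageTime < 120.0)
Ban(u.id); // each call still counts against the global rate limit
}
}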
The only other option is to disable the @everyone role from being able to speak on any channel, but you have to do this on a per channel basis, because Discord ignores you if you attempt to globally disable sending message permissions for @everyone. Even then, creating an "approved" role doesn't work because any automated assignment could be defeated by adding bots one by one. The only defense a small Discord server has is to require moderator approval for every single new user, which isn't a solution - you've just given up having a public Discord server. It's only a matter of time until any angry 13-year-old can buy a sophisticated attack with a week's allowance. What will happen to public Discord servers then? Do we simply throw up our hands and admit that humanity is so awful we can't even have public communities anymore?
The Discord API Hates You
The rate-limits imposed on Discord API endpoints are exacerbated by temporary failures, and that's excluding network issues. Thus, if I attempt to set a silence role on a spammer that just joined, the API will repeatedly claim they do not exist. In fact, 3 separate API endpoints consistently fail to operate properly during a raid: a "member joined" event won't show up for several seconds, but if I fall back to calling GetMember(), this also claims the member doesn't exist, which means adding the role also fails! So I have to attempt to silence the user with every message they send until Discord actually adds the role, even though the API failures are also counted against the rate limit! This gets completely absurd once someone assaults your server with 1000 spambots, because this triggers all sorts of bottlenecks that normally aren't a problem. The alert telling you a user has joined? Rate limited. It'll take your bot 5-10 minutes to get through just telling you such a gigantic spambot army joined, unless you include code specifically designed to detect these situations and reduce the number of alerts. Because of this, a single user can trigger something like 5-6 API requests, all of which are counted against your global rate limit and can severely cripple a bot.

The general advice that is usually given here is "just ban them", which is terrible advice because Discord's own awful message handling makes it incredibly easy to trigger a false positive. If a message fails to send, the client simply sends a completely new message, with its own ID, and will continue re-sending the message until an Ack is received, at which point the user has probably sent 3 or 4 copies of the same message, each of which has the same content, but completely unique IDs and timestamps, which looks identical to a spam attack.
Technically speaking, this is done because Discord assigns snowflake IDs server-side, so each message attempt sent by the client must have a unique snowflake assigned after it is sent. However, it can also be trivially fixed by adding an optional "client ID" field to the message, with a client-generated ID that stays the same if the message is resent due to a network failure. That way, the server (or the other clients) can simply drop any duplicate messages with identical client IDs while still ensuring all messages have unique IDs across their distributed cluster. This would single-handedly fix all duplicate messages across the entire platform, and eliminate almost every single false-positive I've seen in my anti-spam bot.
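The server-side half of that fix would be almost trivial. Here's a sketch, with invented types, since no such field exists in Discord's actual API:

#include <string>
#include <unordered_set>

struct PendingMessage
{
std::string channel;
std::string clientID; // generated once by the client, reused on every resend
std::string content;
};

// Returns true if the message is new; false if it's a duplicate resend.
// "seen" would need an expiry window in practice.
bool AcceptMessage(std::unordered_set<std::string>& seen, const PendingMessage& m)
{
return seen.insert(m.channel + "/" + m.clientID).second;
}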
Discord Doesn't Care
Sadly, Discord doesn't seem to care. The general advice in response to "how do I defend against a large scale spam attack" is "just report them to us", so we did exactly that, and then got what has to be one of the dumbest customer service e-mails I've ever seen in my life:

[Screenshot of Discord's support e-mail, which asks us to leave the offending messages up so their abuse team can investigate]

Excuse me, WHAT?! Sorry about somebody spamming your service with horrifying gore images, but please don't delete them! What happens if the spammers just delete the messages themselves? What happens if they send child porn? "Sorry guys, please ignore the images that are literally illegal to even look at, but we can't delete them because Discord is fucking stupid." Does Discord understand the concept of marking messages for deletion so they are viewable for a short time as evidence for law enforcement?! My anti-spam bot's database currently has more information than Discord's own servers! If this had involved child porn, the FBI would have had to ask me for my records because Discord would have deleted them all!
Obviously, we're not going to leave 500+ gore messages sitting in the chatroom while Discord's ass-backwards abuse team analyzes them. I just have to hope my own nuclear option can ban them quickly enough, or simply give up the entire concept of having a public Discord server.
The problem is that the armies of spambots that were once reserved for the big servers are now so easy and so trivial to make that they're beginning to target smaller servers, servers that don't have the resources or the means to deal with that kind of large scale DDoS attack. So instead, I have to fight the growing swarm alone, armed with only a crippled, rate-limited bot of my own, and hope the dragons flying overhead don't notice.
What the fuck, Discord.
June 1, 2017
Programmers Should Take Linguistics
The older I get, the more I realize that 90% of all disagreements or social drama results from a miscommunication of some kind. Every time I wind up having to resolve a dispute, I'll try to get both sides of the story, only to realize that they're the same story, and both parties were actually either in agreement or fighting over a perceived insult that never actually existed. Unsurprisingly, a disproportionate amount of this miscommunication often involves programmers being far too strict with their interpretations of what certain words mean.
A linguistics class teaches you about what a language actually is - a bunch of sounds that we mutually agree mean certain things. Language is, intrinsically, a social construct. The correct definition of a word is whatever the majority of your primary social circle thinks it is. However, this also means that if you interact with a secondary social circle, and they all think the word means something else, then whenever you interact with them, it really does mean something else. Language is inherently contextual, and the possible meanings of a word can change based on who is saying it and to whom they're saying it. If everyone else on Earth has decided that 'literally' can also mean 'figuratively', then it does, even if the dictionary says otherwise. It also means most people don't actually care if you say "jif" or "gif", they'll just say whatever pronunciation gets you to shut up about it.
It's important to realize that a word's meaning is not defined by a dictionary, but rather by how people use it. The dictionary is simply a reflection of its usage, and is generally a few years out of date. Just as the pronunciation of a word can vary by dialect, so can the potential meanings of a word. Meanings can be invented by regional subdialects and spread outward from there, which is the origin of many slang terms. Sometimes we invent entirely new words, like "dubstep", but young words may have fuzzy definitions. In some dialects of electronic music listeners, "dubstep" is not actually a specific genre, but instead refers to all electronic music. Using dubstep to refer to any electronic song is currently incorrect if used in general parlance, because most people think it is referring to a very specific kind of music. However, if this usage of the word continues to be popularized, eventually the meaning of the word will change into a synonym for electronica, and the dictionaries will be updated to reflect this.
The fluid nature of language is why prescriptive grammar is almost always unnecessary, unless you are deliberately conforming to a grammar standard for a specific medium, such as writing a story. In almost any other context, so long as everyone in your social group understands your 'dialect' of English, then it is valid grammar. However, if you attempt to use this dialect outside of your social circle with people who are not familiar with it, you will once again be in the wrong, as they will have no idea what you're talking about. This, however, does not mean there are no mandatory grammar rules, it's just that most of the rules that are actually necessary to speak the language properly are usually so ingrained that you don't even think about them.
A fantastic example of this is a little known rule in English where all adjectives must come in a very specific order: opinion-size-age-shape-color-origin-material-purpose Noun. So you can have a lovely little old rectangular green French silver whittling knife, but if you switch the order of any of those adjectives you'll sound like a maniac. Conversely, you can never have a green great dragon. Despite the fact that this grammar rule is basically never talked about in any prescriptive grammar book, it is mandatory because if you don't follow it you won't be speaking proper English and people will have difficulty understanding you. True grammar rules are ones that, if not followed, result in nonsensical sentences that are difficult or impossible to parse correctly.

However, this does not mean all sentences that are difficult to understand have incorrect grammar. In fact, even some words are completely ambiguous by default. If I say I'm "dusting an object", the meaning of the phrase is completely dependent on what the object is. If it's a cake, I'm probably dusting it with something. If it's a shelf, I'm probably dusting it to get rid of the dust.
Programmers tend to be very literal minded people, and often like to think that language is a set of strict rules defined by their English class. In reality, language is a fluid, dynamic, ambiguous, constantly changing enigma that exists entirely because we all agree on what a bunch of sounds mean. We need to recognize this, and when we communicate to other people, we need to be on the lookout for potential misinterpretations of what we say, so we can provide clarifications when possible. If someone says something that seems ridiculous, ask them to clarify. I'm tired of resolving disagreements that exist only because nobody stopped to ask the other side to clarify what they meant.
Stop demanding that everyone explain things in a way you'll understand. That's impossible, because everyone understands language slightly differently. Instead, ask for clarification if someone seems to be saying something unusual or before you debate a point they made. Maybe then we can keep the debates to actual disagreements, instead of arguing over communication failures.
May 4, 2017
Why Bother Making An App?
I have an idea for an app. According to startup literature, I'm supposed to get initial fundraising from small-time investors, or "angel" investors, possibly with help from an incubator. Then, after using this money to build an MVP and push the product on the marketplace, I do a Series A round with actual venture capitalists. Now, the venture capitalists probably won't give me any money unless I can give them a proper financial outlook, user growth metrics, and a solid plan for expansion, along with a market cap estimation. Alternatively, I can just use enough meaningless buzzwords and complete bullshit to convince them to give me $120 million for a worthless piece of junk.
Either way, venture capitalists usually want a sizable 10-30% stake in your company (depending on if it's Series A, Series B, or Series C), given how much money they're pouring into a company that might fail. That's okay though, because my app does reasonably well and sells lots of copies on the app store and journalists write about it. Unfortunately, soon sales start tapering off, and ad revenue declines because customers either purchase the pro version or block the ads entirely. While the company is financially stable and making a modest profit, this isn't enough for the investors. They want growth, they need user engagement, they need ever increasing profits. Simply building a stable company isn't enough for them.
So the investors start pushing for you to be bought out. You get lucky, and your app would make a great accessory to Google Assistant, or Cortana, and you get huge buyout offers from Microsoft, Google, and Amazon, because they have more money than most small countries. Investors immediately push for you to take the most lucrative offer from whoever is willing to give you the most cash for fucking over all of your customers. You can push back, but your power is limited, because those investors hold a significant chunk of your company. At best, you can pick the offer that is least likely to completely destroy your product.
If you get lucky, your cross-platform app that worked on everything gets discontinued and re-integrated into one device that people have to buy due to vendor lock-in. If you aren't lucky, your app gets discontinued and completely forgotten about, until someone else comes up with the same idea and the process repeats. Maybe this time they'll get bought out and actually integrated into something.
Either way, your customers lose. Every time. They are punished for believing that a new app, by some new company, could actually survive long enough to be useful to them without being consumed by the corporate monstrosities that run the world. If the company founders are nice, maybe some of the employees walk away rich, but most of them will probably just end up trapped inside a corporate behemoth until they can't take it anymore and finally quit. In your efforts to make the world a better place, you've managed to screw over your company, your customers, and even your employees, because investors don't care about your product, they care about milking you for all you're worth.
But hey, at least you're rich, right?
March 23, 2017
Companies Can't Be Apolitical
One of the most common things I hear from people is that companies should be "apolitical". The most formal way this concept is expressed is that a company should make decisions based on what maximizes profits and not political opinions. Unfortunately, the statement "companies should only care about maximizing profits" is, itself, a political statement (and one I happen to disagree with). Thus, it is fundamentally impossible for a company to be truly apolitical, for the very act of attempting to be apolitical is a political statement.
How much a company can avoid politics generally depends on both the type and size of the company. Once your company becomes large enough, it will influence politics simply by virtue of its enormous size, and eventually becomes an integral part of political debates whether it wants to or not. Large corporations must take into account the political climate when making business decisions, because simply attempting to blindly maximize profit may turn the public against them and destroy their revenue sources - thus, politics themselves become part of the profit equation, and cannot be ignored. Certain types of businesses embody political statements simply by existing. Grindr, for example, is a dating app for gay men. Its entire business model is dependent on enabling an activity that certain fundamentalists consider inherently immoral.
You could, theoretically, try to solve part of this quandary by saying that companies should also be amoral, insofar as the free market should decide moral values. The fundamentalists would then protest the company's existence by not using it (but then, they never would have used it in the first place). However, the problem is that, once again, this very statement is itself political in nature. Thus, by either trying to be amoral or moral, a company is making a political statement.
The issue at play here is that literally everything is political. When most everyone agrees on basic moral principles, it's easier to pretend that politics is really just about economic policy and lawyers, but our current political divisions have demonstrated that this is a fantasy. Politics are the fundamental morals that society has decided on. It's just a lot easier to argue about minor differences in economic policy instead of fundamental differences in basic morality.
Of course, how companies participate in politics is also important to consider. Right now, a lot of companies participate in politics by spending exorbitant amounts of money on lobbyists. This is a symptom of money in politics in general, and should be solved not by removing corporate money from politics, but by removing all money, because treating spending money as a form of speech gives more speech to the rich, which inherently discriminates against the poor and violates the Declaration of Independence's assertion that all men are created equal (but no one really seems to be paying attention to that line anyway).
Instead of using money, corporations should do things that uphold whatever political values they believe in. As the saying goes, actions speak louder than words (or money, in this case). You could support civil rights activism by being more inclusive with your hiring and promoting a diverse work environment. Or, if you live in the Philippines, you could create an app that helps death squads hunt down drug users so they can be brutally executed. What's interesting is that most people consider the latter to be a moral issue as opposed to a political one, which seems to derive from the fact that once you agree on most fundamental morals, we humans simply make up a bunch of pointless rules to satisfy our insatiable desire to tell other humans they're wrong.
We've lived in a civilized world for so long, we've forgotten the true roots of politics: a clash between our fundamental moral beliefs, not about how much parking fines should be. Your company will make a political statement whether you like it or not, so you'd better make sure it's the one you want.
March 7, 2017
I Can't Hear Anything Below 80 Hz*
* at a comfortable listening volume.

EDIT: I have confirmed all the results presented here by taking the low frequency test with someone standing physically next to me. They heard a tone beginning at 30 Hz, and by the time I could hear a very faint tone around 70 Hz, they described the tone as "conversation volume level", which is about 60 dB. I did not reach this perceived volume level until about 120 Hz, which strongly correlates with the experiment. More specific results would require a professional hearing test.
For almost 10 years, I've suspected that something was wrong with my ability to hear bass tones. Unfortunately, while everyone is used to people having difficulty hearing high tones, nobody takes you seriously if you tell them you have difficulty hearing low tones, because most audio equipment has shitty bass response, and human hearing isn't very precise at those frequencies in the first place. People generally say "oh you're just supposed to feel the bass, don't worry about it." This was extremely frustrating, because one of my hobbies is writing music, and I have struggled for years and years to do proper bass mixing, which is basically the only activity on the entire planet that actually requires hearing subtle changes in bass frequencies. This is aggravated by the fact that most hearing tests are designed to detect issues with high frequencies, not low frequencies, so all the basic hearing tests I took at school gave test results back that said "perfectly normal". Since I now have professional studio monitor speakers, I'm going to use science to prove that I have an abnormal frequency sensitivity curve that severely hampers my ability to differentiate bass tones. Unfortunately, at the moment I live alone and nowhere near anyone else, so I will have to prove that my equipment is not malfunctioning without being able to actually hear it.
Before performing the experiment, I did this simple test as a sanity check. At a normal volume level, I start to hear a very faint tone in that example at about 70 Hz. When I sent it to several other people, they all reported hearing a tone around 20-40 Hz, even when using consumer-grade hardware. This is clear evidence that something is very, very wrong, but I have to prove that my hardware is not malfunctioning before I can definitively state that I have a problem with my hearing.
For this experiment, I will be using two JBL Professional LSR305 studio monitors plugged into a Focusrite Scarlett 2i2. Since these are studio monitors, they should have a roughly linear response all the way down to 20 Hz. I'm going to use a free sound pressure app on my Android phone to prove that they have a relatively linear frequency response. The app isn't suitable for measuring very quiet or very loud sounds, but we won't be measuring anything past 75 dB in this experiment because I don't want to piss off my neighbors.
The studio monitor manages to put out relatively stable noise levels until it appears to fall off at 50 Hz. However, when I played a 30 Hz tone at a volume loud enough for me to feel, the sound pressure app still reported nothing, which means the phone's microphone can't detect anything below 50 Hz (I was later able to prove that the studio monitor is working properly when someone came to visit). Of course, I can't hear anything below 50 Hz anyway, no matter how loud it is, so this won't be a problem for our tests. To compensate for the variance in output volume across frequencies, I use the sound pressure app to measure the actual sound intensity being emitted by the speakers.
The first part of the experiment finds the softest volume at which I can detect a tone at any given frequency, starting from D4 (293 Hz) and working down note by note. The loudness of the tone is measured using the sound pressure app. For frequencies above 200 Hz, I can detect tones at volumes only slightly above the background noise in my apartment (15 dB). By the time I reached 50 Hz, I was unwilling to go any louder (and the microphone would have stopped working anyway), but this is already enough to establish 50 Hz as the absolute limit of my hearing ability under normal circumstances.
To get a better idea of my frequency response at more reasonable volumes, I began with a D4 (293 Hz) tone playing at a volume that corresponded to 43 dB SPL on my app, and then recorded the sound pressure level of each note once its volume seemed to match the other notes. This gives me a rough approximation of the 40 phon equal loudness curve, and allows me to overlay that curve onto the ISO 226:2003 standard:
These curves make it painfully obvious that my hearing is severely compromised below 120 Hz, and becomes nonexistent past 50 Hz. Because I can still technically hear bass at extremely loud volumes, I can pass a hearing test trying to determine if I can hear low tones, but the instant the tones are not presented in isolation, they are drowned out by higher frequencies due to my impaired sensitivity. Because all instruments that aren't pure sine waves produce harmonics above the fundamental frequency, this means the only thing I'm hearing when a sub-bass is playing are the high frequency harmonics. Even then, I can still feel bass if it's loud enough, so the bass experience isn't completely ruined for me, but it makes mixing almost impossible because of how bass frequencies interact with the waveform. Bass frequencies take up lots of headroom, which is why in a trance track, you can tell where the kicks are just by looking at the waveform itself:
When mixing, you must carefully balance the bass with the rest of the track. If you have too much bass, it will overwhelm the rest of the frequencies. Because of this, when I send my tracks to friends to get help on mixing, I can tell that the track sounds better, but I can't tell why. The reason is that they are adjusting bass frequencies I literally cannot hear. All I can hear is the end result, which has less frequency crowding, which makes the higher frequencies sound better, even though I can't hear any other difference in the track, so it seems like black magic.
It's even worse because I am almost completely incapable of differentiating tones below 120 Hz. You can play any note below B2 and I either won't be able to hear it or it'll sound the same as all the other notes. I can only consistently differentiate semitones above 400 Hz. Between 120 and 400 Hz, I can sometimes tell them apart, but only when playing them in total isolation. When they're embedded in a song, it's hopeless. This is why, in AP Music Theory, I was able to perfectly transcribe all the notes in the 4-part writing, except the bass, yet no other students seemed to have this problem. My impaired sensitivity to low frequencies means they get drowned out by higher frequencies, making it more and more difficult to differentiate bass notes. In fact, in most rock songs, I can't hear the bass guitar at all. The only way for me to hear the bass guitar is for it to be played by itself.
Incidentally, this is probably why I hate dubstep.
For testing purposes, I've used the results of my sensitivity testing to create an EQ filter that mimics my hearing problems as best I can. I can't tell if the filter is on or off. For those of you that use FL Studio, the preset can be downloaded here.
Below is a song I wrote some time ago that was mastered by a friend who can actually hear bass, so hopefully the bass frequencies in this are relatively normal. I actually have a bass synth in this song I can only barely hear, and had to rely almost entirely on the sequencer to know which notes were which.
This is the same song with the filter applied:
By inverting this filter, I can attempt to "correct" for my bass hearing, although this is only effective down to about 70 Hz, which unfortunately means the entire sub-bass spectrum is simply inaudible to me. To accomplish this, I combine the inverted filter with a mastering plugin that completely removes all frequencies below 60 Hz (because I can't hear them) and then lowers the volume by about 8 dB so the amplified bass doesn't blow the waveform up. This doesn't seem to produce any audible effect on songs without significant bass, but when I tried it on a professionally mastered trance song, I was able to hear a small difference in the bass kick. I also tried it on Brothers In Arms and, for the first time, noticed a very faint bass cello going on that I had never heard before. If you are interested, the FL Studio mixer state track that applies the corrective filter is available here, but for normal human beings the resulting bass is probably offensively loud. For that same reason, it is unfortunately impractical for me to use, because listening to bass frequencies at near 70 dB levels is bad for your hearing, and for that matter it doesn't fix my impaired fidelity anyway, but at least I now know why bass mixing has been so difficult for me over the years.
I guess if I'm going to continue trying to write music, I need to team up with one of my friends that can actually hear bass.
February 13, 2017
Windows Won't Let My Program Crash
It's been known for a while that Windows has a bad habit of eating your exceptions if you're inside a WinProc callback function. This behavior can cause all sorts of mayhem, like your program just vanishing into thin air without any error messages due to a stack overflow that terminated the program without actually throwing an exception. What I didn't realize is that it also eats assert(), which makes debugging hell, because the assertion would throw, the entire user callback would immediately terminate without any stack unwinding, and then Windows would just... keep going, even though the program is now in a laughably corrupt state, because only half the function executed.

Any good developer should be using assertions to verify their assumptions, so having assertions silently fail and then corrupt the program is even worse than ignoring they exist! Now you could have an assertion in your code that's firing, terminating that callback, leaving your program in a broken state, and then the next message that's processed blows up for strange and bizarre reasons that make no sense because they're impossible.

While trying to find a way to fix this, I discovered that there are no less than 4 different ways Windows can choose to eat exceptions from your program. I had already told the kernel to stop eating my exceptions using the following code:
// Not available on all versions of Windows, so load dynamically; these are the
// usual typedefs for this trick, and 0x1 is the callback filter flag.
typedef BOOL (WINAPI *tGetPolicy)(LPDWORD lpFlags);
typedef BOOL (WINAPI *tSetPolicy)(DWORD dwFlags);
const DWORD EXCEPTION_SWALLOWING = 0x1;
DWORD dwFlags;
HMODULE kernel32 = LoadLibraryA("kernel32.dll");
assert(kernel32 != 0);
tGetPolicy pGetPolicy = (tGetPolicy)GetProcAddress(kernel32, "GetProcessUserModeExceptionPolicy");
tSetPolicy pSetPolicy = (tSetPolicy)GetProcAddress(kernel32, "SetProcessUserModeExceptionPolicy");
if(pGetPolicy && pSetPolicy && pGetPolicy(&dwFlags))
  pSetPolicy(dwFlags & ~EXCEPTION_SWALLOWING); // Turn off the filter
However, despite this, COM itself was wrapping an entire try {} catch {} statement around my program, so I had to figure out how to turn that off, too. Apparently some genius at Microsoft decided the default behavior should be to just swallow exceptions whenever they were making COM, and now they can't change this default behavior because it'd break all the applications that now depend on COM eating their exceptions to run properly! So, I turned that off with this code:
HRESULT hr;
CoInitialize(NULL); // do this first
if(SUCCEEDED(CoInitializeSecurity(NULL, -1, NULL, NULL, RPC_C_AUTHN_LEVEL_PKT_PRIVACY, RPC_C_IMP_LEVEL_IMPERSONATE, NULL, EOAC_DYNAMIC_CLOAKING, NULL)))
{
  IGlobalOptions* pGlobalOptions;
  hr = CoCreateInstance(CLSID_GlobalOptions, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pGlobalOptions));
  if(SUCCEEDED(hr))
  {
    hr = pGlobalOptions->Set(COMGLB_EXCEPTION_HANDLING, COMGLB_EXCEPTION_DONOT_HANDLE);
    pGlobalOptions->Release();
  }
}
There are two additional functions that could be swallowing exceptions in your program: _CrtSetReportHook2 and SetUnhandledExceptionFilter, but both of these are for SEH or C++ exceptions, and I was throwing an assertion, not an exception. I was actually able to verify, by replacing the assertion #define with my own version, that throwing an actual C++ exception did crash the program... but an assertion didn't. Specifically, an assertion calls abort(), which raises SIGABRT, which crashes any normal program. However, it turns out that Windows was eating the abort signal, along with every other signal I attempted to raise, which is a problem, because half the library is written in C, and C obviously can't raise C++ exceptions. The assertion failure even showed up in the output... but didn't crash the program!
Assertion failed!
Program: ...udio 2015\Projects\feathergui\bin\fgDirect2D_d.dll
File: fgEffectBase.cpp
Line: 20
Expression: sizeof(_constants) == sizeof(float)*(4*4 + 2)
No matter what I do, Windows refuses to let the assertion failure crash the program, or even trigger a breakpoint in the debugger. In fact, calling the __debugbreak() intrinsic, which outputs an int 3 CPU instruction, was completely ignored, as if it simply didn't exist. The only reliable way to actually crash the program without using C++ exceptions was to do something like divide by 0, or attempt to write to a null pointer, which triggers a segfault.
Any good developer should be using assertions to verify their assumptions, so having assertions silently fail and then corrupt the program is even worse than not having them at all! Now you could have an assertion in your code that's firing, terminating that callback, leaving your program in a broken state, and then the next message that's processed blows up for strange and bizarre reasons that make no sense, because they're impossible.
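As a sketch of the workaround, a replacement assertion macro can force a real crash by deliberately writing to a null pointer, which is the one thing Windows reliably refuses to ignore. The macro name and message format here are hypothetical:

#include <cstdio>

// Hypothetical replacement for assert(): report the failure, then force a
// hard segfault that Windows can't swallow inside a WinProc callback.
#define FG_ASSERT(e) \
  do { \
    if(!(e)) { \
      fprintf(stderr, "Assertion failed: %s (%s:%d)\n", #e, __FILE__, __LINE__); \
      *(volatile int*)0 = 0; /* null pointer write -> guaranteed crash */ \
    } \
  } while(0)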
I have a hard enough time getting my programs to work; I didn't think it'd be this hard to make them crash.
February 12, 2017
Owlboy And The Tragedy of Human Nature
Owlboy is a game developed by D-Pad Studio over a protracted 9-year development cycle. Every aspect of the game is a work of art, meticulously pieced together with delicate care. While it has mostly gotten well-deserved positive reviews from critics, some people have voiced disappointment with the story arc and how it was eventually resolved. Note: I'm about to talk about how the game ends, so an obligatory warning that there are massive spoilers ahead. If you haven't already played it, go buy it, right now.
On the pirate mothership, it is revealed that the titular "owlboy" does not refer to Otus, but to the mysterious cloaked figure, who turns out to be Solus. Otus merely served as a distraction so that Solus could steal the relics from the pirates. Once the heroes follow Solus up to Mesos and beat him into submission, it is revealed that Solus masterminded the events of the entire game, promising the pirates power in exchange for retrieving the relics, then using Otus as a distraction so he could steal the relics himself and use them to power the Anti-Hex to save the world.
Unfortunately, as the heroes immediately point out, this resulted in the destruction of Advent and the deaths of countless innocent people. Had Solus just asked for help, all of this could have been avoided, and the world could have been saved without incident. Solus admits that he felt he had no choice, as he simply had nobody he could trust. Our heroes offer to help finish the ritual, at which point Molstrom shows up and ruins everyone's day. Solus' methods have finally backfired on him, and he is now too injured to complete the ritual—but Otus isn't. In an act of desperation, Solus imbues Otus with the power of the artifacts, and Otus is able to complete the Anti-Hex in his stead, obliterating Molstrom in the process and saving the world.
A lot of people take issue with this ending for two reasons: One, it means almost everything you fought for in the game technically meant nothing, because you were actually working against Solus the entire time. Two, the entire thing could have been avoided if Solus had just trusted someone, anyone, instead of engineering a ridiculously convoluted plot to get what he needed through deceit and betrayal. Otus and Solus here represent a hero and an anti-hero. They are both fundamentally good people, flawed in opposite ways: Solus is doing the right thing for the wrong reasons, whereas Otus is doing the wrong thing for the right reasons. Solus is saving the world, but he achieves it by sacrificing thousands of innocent lives and betraying everyone. Otus is unknowingly dooming the world to destruction, but only because he and everyone else are operating on faulty information. He always chooses to do the right thing, and to trust others.
This is important, because the way the game ends is crucial to the narrative of the story and the underlying moral. Solus may have nearly saved the world, despite his questionable methods, but he would have failed at the end. Molstrom would have found him and easily defeated him, taking all the relics and destroying whatever was left of the world as it rose into space. Only after Solus explained everything to Otus was Otus able to finally do the right thing for the right reasons, thanks to the friends he made on his journey holding Molstrom back. By doing this, Solus allows Otus to atone for failing to secure any of the relics, and by annihilating Molstrom with the Anti-Hex, Otus absolves Solus of the evil he committed in the name of saving the world.
The whole point of this story is that it doesn't matter how many times you fail, so long as you eventually succeed. Otus may have failed to save the world over and over and over, but at the very end, as the world is coming apart at the seams, he is finally able to succeed, and that's what matters. This moral hit me particularly hard because I instantly recognized what it represented - failing to ship a game over and over and over. Owlboy's story is an allegory for its own development, and on a broader scale, for any large creative project that has missed deadlines and is falling behind. It doesn't matter how many times you've failed, because as long as you keep trying, eventually you'll succeed, and that's what really matters. The game acknowledges that humans are deeply flawed creatures, but if we work together, we can counteract each other's flaws and build something greater than ourselves.
This is why I find it depressing that so many people object to Solus' behavior. Surely, no one would actually do that? Yet at the beginning of the game, Solus' defining character moment is being bullied and abused by the other owls. His response to this abuse is to become withdrawn, trusting no one, determined to do whatever is necessary without relying on anyone else. This is exactly what happens to people who have been abused or betrayed and develop trust issues. These people will refuse help they are offered, pushing other people away, often forcing an intervention to break through the emotional wall they have built to keep themselves from being hurt again. That's why you have to fight Solus first: he's spent his whole life building a mental block to keep other people out, and Otus has to break through it to force him to finally accept help.
Far from being unbelievable, Owlboy's plot is entirely too real, and like any good work of art, it forces us to confront truths that make us uncomfortable. Owlboy forces us to confront the tragedy of human nature, and deal with the consequences. At the same time, it shows us that, if we persevere, we can overcome our flaws, and build a better world, together.
February 1, 2017
DirectX Is Terrifying
About two months ago, I got a new laptop and proceeded to load all my projects onto it. Despite everything compiling fine, my DirectX graphics engine mysteriously crashed on startup. I immediately suspected either a configuration issue or a driver issue, but this seemed weird because my laptop had a newer graphics card than my desktop. Why was it crashing on newer hardware? Things got even more bizarre once I narrowed down the issue - it was in my shader assignment code, which hadn't been touched in almost 2 years. While I initially suspected a shader compilation issue, there was no such error in the logs. All the shaders compiled fine, and then... didn't work.
Now, if this error had also been happening on my desktop, I would have immediately started digging through my constant assignments, followed by the vertex buffers assigned to the shader, but again, all of this ran perfectly fine on my desktop. I was completely baffled as to why things weren't working properly. I had eliminated all possible errors I could think of that would have resulted from moving the project from my desktop to my laptop: none of the media files were missing, all the shaders compiled, all the relative paths were correct, I was using the exact same compiler as before with all the appropriate updates. I even updated drivers on both computers, but it stubbornly refused to work on the laptop while running fine on the desktop.
Then I found something that nearly made me shit my pants.
if(profile <= VERTEX_SHADER_5_0 && _lastVS != shader) {
//...
} else if(profile <= PIXEL_SHADER_5_0 && _lastPS != shader) {
//...
} else if(profile <= GEOMETRY_SHADER_5_0 && _lastGS != shader) {
//...
}
Like any sane graphics engine, I do some very simple caching by keeping track of the last shader I assigned and only setting the shader if it has actually changed. These if statements, however, have a very stupid but subtle bug that took me quite a while to catch. They're a standard range-exclusion chain that figures out what type of shader a given shader version is. If it's less than, say, 5, it's a vertex shader. Otherwise, if it's less than 10, it must be in the range 5-10 and is a pixel shader. Otherwise, if it's less than 15, it must be in the range 10-15, ad infinitum. The idea is that you don't need to check if the value is greater than 5, because the failure of the previous statement already implies that. Adding that cache check on the end, however, breaks all of this, because now you could be in the range 0-5 but have the cache check fail, throwing you down to the next statement, which checks whether you're below 10. Because you're in the range 0-5, you're of course below 10, and this time the cache check will ALWAYS succeed, because no vertex shader would ever be in the pixel shader cache! All my vertex shaders were being sent to DirectX as pixel shaders after their initial assignment!
For almost 2 years, I had been feeding DirectX total bullshit, and had even tested it on multiple other computers, and it had never given me a single warning, error, crash, or any indication whatsoever that my code was completely fucking broken, in either debug mode or release mode. Somehow, deep in the depths of nVidia's insane DirectX driver, it had managed to detect that I had just tried to assign a vertex shader to a pixel shader, and either ignored it completely, or silently fixed my catastrophic fuckup. However, my laptop had the mobile drivers, which for some reason did not include this failsafe, and so it actually crashed like it was supposed to.
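For reference, the fix is trivial once you can actually see it: keep the range-exclusion chain for the shader type, but nest the cache check inside each branch so a cache hit can no longer fall through into the wrong range. A sketch of the corrected chain:

// Nesting the cache check means a cached vertex shader can never fall
// through to the pixel shader branch.
if(profile <= VERTEX_SHADER_5_0) {
  if(_lastVS != shader) {
    //...
  }
} else if(profile <= PIXEL_SHADER_5_0) {
  if(_lastPS != shader) {
    //...
  }
} else if(profile <= GEOMETRY_SHADER_5_0) {
  if(_lastGS != shader) {
    //...
  }
}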
While this was an incredibly stupid bug that I must have written while sleep deprived or drunk (which is impressive, because I don't actually drink), it was simply impossible for me to catch because it produced zero errors or warnings. As a result, this bug has the honor of being both the dumbest and the longest-living bug of mine, ever. I've checked every location I know of for any indication that anything was wrong, including hooking into the debug log of DirectX and dumping all its messages. Nothing. Nada. Zilch. Zero.
I've heard stories about the insane bullshit nVidia's drivers do, but this is fucking terrifying.
Alas, there is more. I had been experimenting with Direct2D as an alternative because, well, it's a 2D engine, right? After getting text rendering working, a change in string caching suddenly broke the entire program. It broke in a particularly bizarre way, because it seemed to just stop rendering halfway through the scene. It took almost an hour of debugging for me to finally confirm that the moment I rendered a particular text string, the Direct2D driver just stopped. No errors were thrown. No warnings could be found. Direct2D's failure state was apparently to make every single function call silently fail with no indication that anything was failing in the first place. It didn't even tell me that the device was missing or that I needed to recreate it. The text render call was made, and then every single subsequent call was ignored and the backbuffer was frozen forever on that half-drawn frame.
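For context, the only place Direct2D normally reports a lost device at all is the HRESULT returned by EndDraw(); the drawing calls themselves return void and just silently no-op. A sketch of that check (pRenderTarget being whatever render target the app owns), which in my case never reported anything:

pRenderTarget->BeginDraw();
// ... issue draw calls; individual calls return void and can't report errors ...
HRESULT hr = pRenderTarget->EndDraw();
if(hr == D2DERR_RECREATE_TARGET)
{
  // Device was lost: discard device-dependent resources and recreate the
  // render target before drawing the next frame.
}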
The error itself didn't make any sense, either. I was passing a perfectly valid string to Direct2D, but because that string originated in a different DLL, Direct2D completely shit itself. Copying the string onto the stack, however, worked (which could only work if the original string was valid in the first place).
The cherry on top of all this was discovering that Direct2D's matrix rotation constructor takes degrees instead of radians, unlike every single other mathematical function in the standard library. Even DirectX takes radians!
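To illustrate the mismatch, a minimal sketch assuming the d2d1helper.h and DirectXMath headers:

#include <d2d1helper.h>
#include <DirectXMath.h>

void RotationExample()
{
  const float radians = 1.0f;                           // what everything else expects
  const float degrees = radians * 180.0f / 3.14159265f; // what Direct2D wants

  // Direct2D's rotation helper takes degrees...
  D2D1::Matrix3x2F rot2D = D2D1::Matrix3x2F::Rotation(degrees, D2D1::Point2F(0.0f, 0.0f));
  // ...while DirectXMath, like every other math library, takes radians.
  DirectX::XMMATRIX rot3D = DirectX::XMMatrixRotationZ(radians);
}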
WHO WRITES THESE APIs?!