## September 28, 2015

### There Will Never Be One True Programming Language

A disturbing number of otherwise intelligent programmers seem to believe that, one day in the distant future, everyone will use the same magical programming language that solves everyone's problems at the same time. Often, this involves garbage collection, with lots of hand-waving about computers built with Applied Phlebotinum.

For the sake of argument, let's assume it is the year 30XX, and we are a budding software developer on Mars. We have quantum computers that run on magic pixie dust and let us calculate almost anything we want as fast as we want so long as we don't break any laws of physics. Does everyone use the same programming language?

No. Obviously, even if we built quantum computers capable of simulating a classical computer a thousand times faster than normal for some arbitrary reason[1], we would still want to write software in a way that was easy to comprehend, and qubits are anything but. So, the standard programming language our Mars programmer would be using would not interact with the quantum computer at all. Perhaps it would be some form of functional language, which the cool kids seem to think will save the entire software industry and make everything faster, better, more reliable and probably cure cancer in the process[2].

But now, this same programming language that allows you to ignore the quantum computer must also be used to write its own compiler that runs on... a quantum computer. So it needs to simultaneously be an elegant language for writing quantum programs. It also needs to be the perfect language for writing games and business applications and hypernet sites and machinery and brain implants and rocket trajectories and robot AI and planetary weather simulators and life support systems and hologram emitters and—

... Wouldn't it be a lot easier to just have domain-specific languages?

If you want to get technical, you can do all that with Lisp. It's just lambda calculus. It's also a giant pain in the ass when you use it for things it was never meant for, like low-level drivers or game development. You could try extending Lisp to better handle those edge cases, and that's exactly what many Lisp derivatives do. The problem is that we will always be able to find new edge cases. So, you would have to keep bolting things onto your magical language of ultimate power, playing a game of context-free whack-a-grammar for the rest of eternity.

This is ultimately doomed to fail, because it ignores the underlying purpose of programming languages: A programming language is a method of communication. Given two grammars, one that can do everything versus one that can do the one thing you're trying to do really well, which one are you going to pick for that particular project? Trying to get everyone to use the same language while they are trying to solve radically different problems is simply bad engineering. Do you really want to write your SQL queries using Lisp? You can only stuff so many tools into a swiss army knife before it becomes completely impractical.

Of course, a language that could do everything would also allow you to define any arbitrary new language within it, but this is equivalent to building an entire new language from scratch, because now everyone who wants to contribute to your project has to learn your unique grammar. However, having a common ground for different languages is a powerful tool, and we are headed in that direction with LLVM. You wouldn't want to write a program in LLVM, but it sure makes writing a compiler easier.

Instead of a future with one true programming language, perhaps we should be focused on a future with standard low-level assembly. A future with many different languages that can all talk to each other. A future where we don't need to argue about which programming language is the best, because every language could be used for whatever task it is best suited for, all in the same project. Different compilers could interpret the standard low-level assembly in their own ways, optimizing for different use-cases.

A universal standard for low-level assembly would solve everyth—

... Actually, nevermind[3].

[1] As of right now, this is not actually true for the vast majority of computations we know how to do with qubits. We know that certain classes of problems are exponentially faster with quantum computers, but many other functions would get a mere linear increase in speed, at most. Whether or not this will change in the future is an active area of research.
[2] They totally didn't say the same thing about object-oriented programming 30 years ago.
[3] </sarcasm>

## August 5, 2015

### Abortion Has No Moral High Ground

June 26, 2015 was a historic moment. That was the day the Supreme Court legalized gay marriage on a federal level in the United States. As far as I'm concerned, this victory was the inevitable result of a point of view that relied entirely on logical fallacies and irrational hatred. Yet, it was a battle that was fought for centuries, and continues on, as obstinate conservatives vow to continue fighting to the bitter end. The fact that gay rights was ever a political issue will be considered barbaric by future civilizations, because being gay is inherently harmless, and attempting to control who people are allowed to love is an egregious violation of individual rights.

It was this innate emotional connection that won the battle. Every human being on this planet can empathize with the desire to love freely. We were united in our desire for love to win, just like it does in all of our fairy tales.

Abortion, on the other hand, is likely to remain an issue for hundreds of years. I'm not sure it will ever be put to rest until we invent technology that renders the entire debate irrelevant. The problem is that, with abortion, there is no moral high ground, because under all circumstances abortion leaves us with two choices: either we pick an arbitrary point in time at which a fetus suddenly gains rights, or we are forced to violate someone's rights no matter what option we choose.

First, let me clarify why we cannot base the point at which a fetus suddenly gains rights on any meaningful science: babies do not have on switches. Consciousness is an emergent phenomenon, and we have known for years that animals have some degree of awareness. They have a consciousness much like we do, but operating on a lower level (in some cases, it may simply be on a different level). Babies can't even pass the mirror test until they're about 18 months old. Because of this, we will be forced to pick some completely arbitrary point in time at which the baby suddenly gains rights, if we want to avoid violating those rights.

Now, if we don't do that (and I find it incredibly hard to believe that we will, given the number of people who think that a fertilized egg has human rights), we have two choices:
1. We forbid abortion, violating both the mother's right to her own body and her right to live (0.019% of pregnancies are fatal).
2. We allow abortion, violating the baby's right to live.

There is no way out of this. We cannot use logic to pick a point in time where the baby suddenly gains rights, because it will inherently be arbitrary. We'd basically have to say "Well, if you're 80% human, we'll give you rights, because, well, 80% sounds like a good number." No matter how much reasoning we have behind that number, it's still fundamentally arbitrary, and therefore it will always be a valid argument to say it is morally wrong. This means we are forced to violate someone's rights no matter what we do. Some people think the baby is more important. Some people think the woman is more important. Some people think who's more important depends on how far along the baby is. There is no nice, clean solution to this. We have to screw somebody over no matter what we do.

My personal view, which is that we should allow abortions only in the first trimester, is not born out of some moral argument. It is simple engineering pragmatism: it is the least objectionable and most practical solution that I can come up with. I am forced to fall back on my engineering background for this problem because there is no valid moral solution. If someone asks me why I think it's okay that we're killing a fetus, I would say that it's not okay. However, it's also not okay to deny a woman the right to her own body. It's not okay to allow unwanted pregnancies to result in unwanted children that get left in orphanages. It's not fair that the hypothetical wanted child that could have been born later will now never be born. No part of this situation is ever okay, on either side of the debate.

If we want to resolve the debate, society as a whole simply has to try and figure out how to minimize the damage. That is all we can do. There is no right answer. There are only wrong answers, and we have to pick the least wrong one.

## July 28, 2015

### Why I'll Never Get The Life I Wanted

"We must let go of the life we have planned, so as to accept the one that is waiting for us." - Joseph Campbell

I never liked money. It was always an inconvenience, a roadblock that had to be maneuvered around to get to where I actually wanted to go. For a long time, I had a nice life planned out for myself. I'd take a few consulting jobs and travel the world, staying with friends and squeezing by on a slim budget. In my spare time, I could develop open-source software and give it away for free. I could focus on making the world a better place, instead of making a ton of money that served no purpose.

I'd never need to be well known. I could simply live the life of a nomad, earning just enough for food and maybe the occasional roof, when I wasn't crashing at a friend's place. I never wanted anything else. I wanted freedom. I wanted to explore. I wanted to find beautiful places in nature, and to build cool things with code. Take some pictures, write some music. Maybe learn to draw. A carefree life where I didn't worry about climbing some ridiculous corporate ladder, or proving myself. A life where I could exist outside of the toxic society that I was born into.

There is a bitter irony in my current situation. After a year-long stint at a large software corporation that paid me absurd amounts of money, I now have plenty of savings to pursue the life I had always wanted...

But I can't. Or perhaps, I won't. There's nothing actually stopping me. In fact, later this year, I intend to go on a nice vacation across the west coast, while I still can. So I can at least experience what my life might have been for a brief moment, before I am dragged back into hell. A hell where my friends can't afford to let me stay at their houses for extended periods of time, because most of them live in poverty. A hell where income inequality lets me do whatever I please by taking everything from those less fortunate than me. A world so full of shit I can't simply ignore it.

There can be no carefree lifestyle, because it is nothing more than an illusion. I refuse to live a life that no one else can simply because they weren't born into a rich, stable middle-class family in a wealthy city. When I was young, I thought I was choosing this life for myself. I thought I was choosing to eschew consumerism and materialistic rewards for a more fulfilling life because I was making a wise choice. Instead, my ability to ignore our toxic society was simply because I was born in the right place. Other people don't try to climb corporate ladders because they like to, they do it because they have to.

I'm lucky. I don't. I can do whatever I want. Not because I'm smarter, or better, or wiser, but simply because I was given the opportunity. I've met a lot of people now who want what I want. They want to make the same choices I would. Their hearts are in the right places, but they're all living in poverty. One has a crippling auto-immune disorder. One lives with abusive parents. Half of them are gay. They all wanted to make the right choices, but they couldn't, because they weren't lucky. Because they got the short straw. Because life decided to fuck them over.

This isn't what I wanted. I wanted to make the world a better place. I thought I could do that by avoiding Wall Street and its disgusting corporate corruption and greed. I thought anyone could live a quiet, frugal life if they just avoided consumerism and greed. I thought there would be other intelligent rich people who would strive to make the world a better place. I thought good things happened to good people.

I was an idiot. The only way a rich person is going to actually make the world a better place is if I do. The only way my friends will ever get a job that isn't horrible is if I build a billion dollar company and hire them myself. The only way to make the future a place I'd want to live in is if I drag the world into it kicking and screaming. The only way we'll get programs that aren't giant towers of duct tape and prayers is if I force people to actually use good engineering practices.

I grew up believing the world was a magical place where people got jobs and lived the lives they wanted. I understand now that, if I truly believe in that world, I have to make it happen myself. It's not the life I had planned, but life is like an improvisation. If you try to play the song you intended, you'll miss out on all the opportunities created by your mistakes. If I must throw myself into building a business, then so be it.

It's not like I can do anything else; I'd never forgive myself if I simply rode off into the horizon, ignoring the plights of all the human beings I cared about.

## July 21, 2015

### I Tried To Install Linux And Now I Regret Everything

I am going to tell you a story.

This story began with me getting pissed off at Windows for reasons that don't really need to be articulated. Just, pick your favorite reason to hate Windows, and let's pretend that was the tipping point. I had some spare space on my secondary drive, so I decided to give Linux another whirl. After all, it had been three or four years since I last attempted anything like this (it's been so long I don't rightly remember the last time I tried), and people have been saying that Linux has gotten a lot better recently!

The primary reason I was attempting this is because of Valve's attempts to move to Linux, which have directly resulted in much better nVidia driver support. Trying to install proper nVidia drivers is the thing that wrecked my last attempt. This time, I figured, I could just install the nVidia drivers straight from the repo and everything would be hunky-dory, and I could just spend the rest of my time beating on the rest of Linux with a comically oversized wrench until it did what I wanted.

I'd had good experiences with XFCE and Fedora on my Linux VM, but I didn't really like Fedora itself, though it interfaced very well with my VM environment. I wanted to install Ubuntu, because it has the best support and I don't like trying to dig through arcane forum posts trying to figure out why my computer screen has suddenly turned into an invisible pink unicorn. Unfortunately, Ubuntu is a bloated mess, and I hate its default desktop environment. In the past, I had tried Linux Mint, which had been okay, but support had been shaky. I spotted Lubuntu, which is supposed to be a lightweight Ubuntu on top of LXDE, a minimal window manager similar to XFCE. This was perfect! So I downloaded Lubuntu 15.04 and installed it on my secondary drive and everything was nice and peachy.

Well, until Linux started, anyway. The first problem was that the package manager in pretty much every Linux distro lists all the packages, including all the ones I don't understand. I was trying to remove some pre-included Paint application, and it had two separate packages: one named after itself, and one named <name>-common, but the common package wasn't automatically removed when the program was! The nVidia packages also had both an nvidia-drivers-340[1] package and an nvidia-drivers-340-update package, both of which had identical descriptions. I just went with the most basic one because it seemed sensible, but I felt really sorry for anyone less tech-savvy trying to find anything in that godforsaken list.

So after everything updated and I restarted and told the package manager to start installing my nVidia drivers, I started noticing annoying things about LXDE. A lot of annoying things. Here, let me list them for you:

• The file manager, when unmounting anything, would helpfully close itself.
• Trying to change the time to not display military time involves editing some arcane string whose only value is %r, and I still don't know what that means and don't want to know what that means. All I wanted to do was change it to say AM or PM!
• In order to get a shortcut onto the desktop, you had to right-click the menu item and then a new menu would show up that would let you add it to the desktop. You can't drag anything out of the start menu.
• The shortcuts seemed disturbingly fickle, occasionally taking 4 rapid clicks to get things to start up.
• Steam simply didn't start at all. Unfortunately, we will never know why.
• Skype managed to spawn a window halfway above the top of the screen, which resulted in me having to look up alt+space in order to rescue it.
• Skype also cut off everything below the baseline of the bottom-most line of text, so you couldn't tell if something was an i or a j.
• In Linux, just like in Windows, modal dialogs have a really bad habit of sucking up keyboard focus, landing on a button, and then making my next keystroke click that button. The fact that this is a widespread UI problem across most operating systems is just silly.

There were more nitpicks, but I felt like a large number of these issues could probably have been resolved by switching to a better window manager. I was honestly expecting better than this, though. LXDE was supposed to be like XFCE, but apparently it's actually XFCE with all its redeeming qualities removed. However, it turns out that this doesn't matter! To discover why, we have to examine what happened next.
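For what it's worth, the mysterious `%r` from the clock complaint above turns out to be the standard strftime code for the 12-hour clock: it expands to `%I:%M:%S %p`, which is exactly the AM/PM display I wanted. A quick Python sketch (note the AM/PM marker from `%p` is locale-dependent):

```python
import time

# %r is the POSIX strftime conversion for the full 12-hour clock;
# it is equivalent to "%I:%M:%S %p" (hour:minute:second plus AM/PM).
t = time.struct_time((2015, 7, 21, 14, 30, 0, 1, 202, 0))

print(time.strftime("%I:%M:%S %p", t))  # e.g. "02:30:00 PM" (locale-dependent)
print(time.strftime("%r", t))           # same thing, where %r is supported
```

Not that any of this was discoverable from the clock settings dialog itself, which was rather the point.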

My friend suggested I try Linux Mint, which has a better window manager and would probably address most of those issues. So, I downloaded the ISO. I already had a USB stick set up for booting that had Lubuntu on it, so I wanted to see if I could just extract the contents of the ISO onto the USB stick. I have no idea if this would have actually worked, and I never got to find out, because upon clicking and dragging the contents of the ISO out of the archive manager, with the intent of extracting them, the entire system locked up.

Now, I'm not sure how stupid trying to drag a file out of an ISO was, but I'm pretty sure it shouldn't render my entire computer unusable. This seems like an unfair punishment. Attempts to recover the desktop utterly failed, as did ctrl-alt-del, alt-F2, alt-anything else, or even trying to open a terminal (my friend would later inform me that it is actually ctrl-alt-F2 that opens the terminal, but I couldn't ask him because the entire desktop had locked up!). So I just restarted the machine.

That was the last time I saw my Lubuntu desktop.

Upon restarting my machine, I saw the Lubuntu loading screen come up, and then... nothing. Blackness. About 20 seconds later, an error message pops up: "Soft Lock on CPU#0!". Then the machine rebooted.

After spending just one hour using Linux, it had bricked itself.[2] I hadn't even gotten Steam working yet, and now it was completely unusable. This is not "getting better"; this is strapping a rocket to your ass and going so fast in the wrong direction you break the sound barrier. Now, if you have been paying attention, you will note that I had just finished installing my nVidia drivers before the desktop locked up, and that the drivers probably wouldn't actually be used by the desktop environment until the process was restarted. After mounting my Linux partition in Windows and extracting my log files, I sent them to my friend, who discovered that it had indeed been the nVidia driver that crashed the system.[3]

This is problematic for a number of reasons, because those drivers were written for Ubuntu-based distros, which means my system could potentially lock up if I installed any other Ubuntu-based distro, which is... just about all the ones that I cared about. Including Linux Mint. At this point, I had a few options:

1) Install another Ubuntu-based distro anyway and hope it was a fluke.
2) Install something based on Debian or maybe Fedora.
3) Use the open-source drivers.
4) Fuck this shit.

Unfortunately, I have little patience left at this point, and after seeing Linux first lock up after trying to drag files before bricking itself, I don't really have much confidence in the reverse-engineered open-source nVidia drivers, and I certainly am not going to entrust my video card to them in the hopes it doesn't melt. I really, really don't want to play whack-a-mole with Linux distros, trying to find the magical one that wouldn't wreck my graphics card, so I have simply opted for option 4.

But it doesn't end there.

About two or three hours after this all happened (I had been fairly distracted), I noticed that my Windows clock was way off. At this point, I remembered something my friend had told me about: Linux, like any sane operating system, sets the hardware clock to UTC and then modifies it based on the timezone. Windows, on the other hand, decided it would be a fantastic idea to set the hardware clock to local time. Of course, this should have been an easy fix. I just set the time back, forced an update, and bam, time was accurate again. Crisis averted, right?

No, of course not. Linux had not yet finished punishing me for foolishly believing I was worthy of installing something of its calibre. Because then I opened Skype, and started receiving messages in the past. Or more accurately, my chat logs were now full of messages that had been sent tomorrow.

It was at this point I realized what had happened. It wasn't that the timezone had been changed, or something reversible like that. The hardware clock itself had been modified to an incorrect value. Skype had spent the past two hours happily sending messages with a timestamp 8 hours in the future because some idiot at Microsoft thought it was a good idea to set the hardware clock to local time, and now all these incorrect client side timestamps had been propagated to the cloud and synced across all my devices.
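To make the failure concrete, here's a minimal Python sketch of the skew, assuming a fixed UTC-8 offset for simplicity (the real offset depends on your timezone and daylight saving):

```python
from datetime import datetime, timedelta

offset = timedelta(hours=-8)  # assumed fixed UTC-8 offset, for simplicity

# Linux writes UTC to the hardware clock:
actual_local_time = datetime(2015, 7, 21, 12, 0)  # real wall-clock time: noon
rtc = actual_local_time - offset                  # RTC now holds 20:00 (UTC)

# Windows then reads the RTC and assumes it already holds local time,
# so every timestamp it produces lands 8 hours in the future:
windows_wall_clock = rtc
print(windows_wall_clock - actual_local_time)  # 8:00:00
```

Both conventions work fine in isolation; the disaster only happens when two operating systems sharing one hardware clock disagree about what it means.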

I slowly got up from my computer. I walked over to my bed, lay down, curled into a ball, and wept for the future of mankind.

[1] I may have gotten these package names slightly wrong, but I can't verify them because Linux won't boot up!
[2] It's probably entirely possible to rescue the system by mounting the file system and modifying the X config file to not use nvidia, but this was supposed to just be "install linux and set it up", not "install linux and spend the next 5 hours frantically trying to get it to boot properly."
[3] After submitting my log files to #ubuntu to report the issue, my friend discovered that the drivers for ATI/AMD graphics cards are also currently broken for many Ubuntu users. What timing! This certainly makes me feel confident about using this operating system!

## June 15, 2015

### We Aren't Designing Software For Robots

"I have the solution, but it only works in the case of a spherical cow in a vacuum." - old physics proverb

Whenever I am designing something, be it an API or a user interface, I try to remember that I am not designing this for a perfectly rational agent. Instead, I am designing software for a bunch of highly emotional, irrational creatures called human beings, all of whom have enormously different tastes. I try to include options, or customization, or if that isn't possible, a compromise. I try to keep the door open, to let my software be used as a tool to enhance someone's productivity no matter what workflow they use, instead of trying to impose my own workflow on them.

For some reason, many programmers seem to struggle with the concept of people being different. They get hung up on this naïve concept of right or wrong, as if life is some kind of mathematical equation that has a closed form solution. Let me say right now that any solution to life is going to be one heck of a chaotic nonlinear PDE, which won't have any closed form solution at all, and certainly not one using elementary functions. When you are developing software, you must keep in mind the range of customers who will be using your product, whether they are office workers or fellow programmers.

Maybe someone is using your product to try and finish a presentation in time to go home and catch a nap before they get up to work their second job so they can support a wife and a screaming baby. Someone else might use your product to track their progress as they try to revolutionize search from their bedroom instead of study for their finals next week. Someone else might be an elderly man trying to figure out how his vacation is going to go.

We are all different. We arise from all walks of life and are bound together in a great journey on this blue ball hurtling through space. It is not cowardice when two people try to put aside their differences and work together, it is strength. It requires enormous courage to admit that there are no simple answers in life. There are no answers in the back of the textbook. There are many different answers, all different in subtle ways, all suitable for slightly different circumstances, all both right and wrong in their own, strange, quirky ways.

Some programmers seem distressingly incapable of compassion or empathy. Many claim to live through the cold hard logic of data, without apparently realizing that data itself is inherently meaningless. It is only given meaning through a human's interpretation, and a human can interpret data to mean whatever they want. They seem to think they can solve problems by reducing everyone into neat little collections of numbers that can be easily analyzed. It's certainly a lot less frustrating to work with real numbers instead of real people, but inevitably, a programmer must come to terms with the fact that life is about human beings, not numbers on a screen.

The cold hard logic of our code is good for speaking to computers—it is not good for speaking to other human beings.

## May 30, 2015

### Using Data To Balance Your Game: Pony Clicker Analysis

The only thing more addictive than heroin is numbers that keep getting larger.

Incremental and idle games are seemingly simplistic games where you wait or click to increase a counter, then use that counter to buy things that make the counter go up faster. Because of the compounding effects involved, these types of games inevitably turn into a study of growth rates and how different functions interact. Cookie Clicker is perhaps the most well-known; it employs an exponential growth curve for the cost of buildings that looks like this:
$Cost_n = Cost_0\cdot 1.15^n$
Where $Cost_0$ is the initial cost of the building. Each building, however, has a fixed income, and so the entire game is literally the player trying to purchase upgrades and buildings to fight against an interminable exponential growth curve of the cost function. Almost every single feature added to Cookie Clicker is yet another way to battle the growth rate of the exponential function, delaying the plateauing of the CPS as long as possible. This includes the reset functionality, which grants heavenly chips that yield large CPS bonuses. However, no feature can compensate for the fact that the buildings do not have a sufficient growth rate to keep up with the exponential cost function, so you inevitably wind up in a dead end where it becomes almost impossible to buy anything in a reasonable amount of time regardless of player action.
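As a quick illustration, here is that cost curve in a few lines of Python (the base cost of 15 is just an example value):

```python
# Cookie Clicker-style cost curve: each additional copy of a building
# costs 15% more than the previous one.
def building_cost(base_cost: float, owned: int) -> float:
    """Cost of the next copy of a building, given how many are already owned."""
    return base_cost * 1.15 ** owned

# With an example base cost of 15, the tenth copy already costs ~3.5x as much:
print(building_cost(15, 0))            # 15.0
print(round(building_cost(15, 9), 2))  # 52.77
```

Since each building's income is fixed, that 15% compounding is what eventually outruns the player no matter what they buy.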

Pony Clicker is based on Cookie Clicker, but takes a different approach. Instead of having set rates for each building, each building generates a number of smiles based on the number of ponies and friendships that you have, along with other buildings that "synergize" with that building. The more expensive buildings generate more smiles because they have a higher growth rate than the buildings below them. This makes the game extremely difficult to balance, because you only have equations and the cost curves to work with, instead of simply being able to set the per-building SPS. Furthermore, the SPS of a building continues to grow and change over the course of the game, further complicating the balance equation. Unfortunately, in the first version of the game, the growth rate of the end building exceeded the growth rate of the cost function, which resulted in immense end-game instability and all-around unhappiness.

To address balance problems in Pony Clicker, rather than simply throwing ideas at the wall and trying to playtest them endlessly, I wrote a program that played the game for me. It uses a nearly optimal strategy: buy whichever building is most efficient in terms of cost per +1 SPS. This is not perfectly optimal, since a truly optimal strategy would also take into account how long the next building would take to afford, but it was pretty close to how players tended to play.
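Here is a toy sketch of that bot's greedy strategy. The building data below is made up, and each building has a flat SPS for simplicity (unlike the real game, where SPS depends on the whole game state), but the purchase loop is the same idea:

```python
# Toy balance bot: at every step, buy the building with the lowest cost
# per +1 SPS, idling until it is affordable, and record SPS over time.

def cost(building, owned):
    # Exponential cost curve, as in the Cookie Clicker formula above.
    return building["base_cost"] * 1.15 ** owned

def simulate(buildings, steps=50):
    owned = {b["name"]: 0 for b in buildings}
    sps, smiles, elapsed, history = 1.0, 0.0, 0.0, []
    for _ in range(steps):
        # Greedy heuristic: best purchase by cost per +1 SPS.
        best = min(buildings, key=lambda b: cost(b, owned[b["name"]]) / b["sps"])
        price = cost(best, owned[best["name"]])
        wait = max(0.0, (price - smiles) / sps)  # idle until affordable
        elapsed += wait
        smiles += wait * sps - price
        owned[best["name"]] += 1
        sps += best["sps"]
        history.append((elapsed, sps))
    return history

# Made-up placeholder buildings, not Pony Clicker's real data:
buildings = [
    {"name": "house", "base_cost": 15, "sps": 0.5},
    {"name": "barn", "base_cost": 100, "sps": 4.0},
]
history = simulate(buildings)
print(history[-1])  # (elapsed time, SPS) after 50 purchases
```

Plotting `history` is exactly the kind of SPS-over-time graph discussed below: any runaway interaction between the cost curve and the income curve shows up immediately.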

Using this, I could analyze a game of pony clicker in terms of what the SPS looked like over time. My first graph was not very promising:

The SPS completely exploded and it was obviously terrible. To help me figure out what was going on, I included a graph of the optimal store purchases and the time until the next optimal purchase. My goal in terms of game experience was that no building would be left behind, and that there shouldn't be enormous gaps between purchases. I also wanted to make sure that the late game or the early game didn't take too long to get through.

In addition to this, I created a graph of the estimated SPS generation of each individual building, on a per-friendship basis. This helped compensate for the fact that the SPS changed as the game state itself changed, allowing me to ensure that the SPS generation of any one building wasn't drastically out of whack with the others, and that it increased on a roughly linear scale.

This information was used to balance the game into a much more sane curve:

I then added upgrades to the main graph, and quickly learned that I was giving the player certain upgrades way too fast:

This was used to balance the upgrades and ensure they only gave a significant SPS increase when it was really needed (between expensive buildings, usually). The analysis page itself is available here, so you can look at the current state of pony clicker's growth curve.

These graphs might not be perfect, but they are incredibly helpful when you are trying to eliminate exponential explosions. If a value spirals out of control, a graph will tell you immediately. It makes it very easy to quickly balance purchase prices, because you can adjust the prices and see how this affects the optimal gameplay curve. Pony Clicker had so many interacting equations that this was basically the only way I could come up with a game that was even remotely balanced (although it still needs some work). It's a great example of how important rapid feedback is when designing something. If you can get immediate feedback on what changing something does, it makes the process of refining it much faster, which lets you do a better job. It also lets you experiment with things that would otherwise be way too complex to balance by hand.

## May 14, 2015

### Pony Clicker Postmortem

Never Again...
Pony Clicker was intended to be a fun experiment in designing an HTML5 game. It was developed over a period of about 2 weeks, which was a lot longer than I had anticipated. Normally I build games using low level graphics APIs and highly optimized physics engines, so I wanted to try something that would be simple, where I could rely on HTML5 to do most of the work for me... right?

Wrong. The key thing I learned while making Pony Clicker was that HTML is evil. If you are making just about anything even remotely interactive in HTML, I strongly recommend using the HTML5 canvas. Everything else will almost inevitably fall over. CSS animations simply aren't going to cut it, and the eccentricities of HTML rendering cause enormous problems with game interfaces. Save yourself the pain and just slap a giant canvas on the screen and render everything to it. Even with the canvas's known performance issues, it will probably still be faster than the DOM anyway, and now you have much more control over what everything is doing.

I also managed to find a memory leak of sorts in Chrome's DOM renderer. Because of it, Pony Clicker will climb to about 550 MB of memory while you're actively playing, until the GC wakes up and actually does its job. The exact details are complicated, but the gist is that I can create a page containing no JavaScript at all, only a `<canvas>` element and a `<div>` element below it with an `:active` effect, and by clicking on the div element, which does absolutely nothing, make Chrome allocate 20 megs of memory each time. It would be funny if it weren't so horrifying. I'll write up a separate blog post for that issue.

In terms of game design, Pony Clicker is a more complex version of Cookie Clicker. In Cookie Clicker, each building simply gives you more cookies. That's it. In Pony Clicker, you construct a graph of relationships, and then buy buildings that give you smiles based on how many friends, ponies, or other buildings you have. It's basically Graph Theory Meets Growth Rates: The Game, where each successive building uses a function with an ever-increasing growth rate. Thus, by the time you reach a limited factorial function, each later building provides enormous numbers of smiles simply because of the generating function's explosive growth.
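The "ever-increasing growth rate" idea can be illustrated with a ladder of stand-in smile functions (none of these are the game's actual formulas), ending in a capped, or "limited", factorial:

```javascript
// Factorial of min(n, cap): grows explosively, then levels off at cap!.
function limitedFactorial(n, cap) {
  let result = 1;
  for (let k = 2; k <= Math.min(n, cap); k++) result *= k;
  return result;
}

// Each successive building's smile function grows strictly faster in n.
const smileFunctions = [
  (n) => n,                        // linear
  (n) => n * n,                    // polynomial
  (n) => Math.pow(2, n),           // exponential
  (n) => limitedFactorial(n, 10),  // limited factorial
];
```

Past the cap, the factorial stops growing, which keeps late-game numbers finite while still dwarfing the output of the earlier functions.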

Predictably, this made balancing the game difficult. At first, I was excited, because I could start using all that crap I learned in Combinatorics and derive a bunch of equations to balance the game for me based on a few curves that I defined. Inevitably, this did not work. Either the equations were too complex to get reasonable solutions out of, or they simply didn't work, because I was relying heavily on heuristic functions to guess how many buildings would be owned at a given point. I ended up using a combination of functions that allowed me to predict the SPS of any building at any given time, and then used this to define the costs of all the buildings in terms of the cost curve of the friendships. Thus, everything in the game is keyed off the friendship cost curve, which can be modeled by a recurrence relation:
$$F_{n+1} = r F_n$$

where $r$ is the curve value (Pony Clicker uses $r = 1.6$, because that's close to the golden ratio and it seemed to work nicely). This is a trivial linear recurrence relation, so we can get a closed-form solution out of it:

$$F_n = F_0 r^n$$

The same kind of curve is used for just about everything else in the game, including the cost curves of the buildings. Cookie Clicker uses the same curve for all its buildings, with $r = 1.15$. This stops working for Pony Clicker because the later buildings provide ever-increasing amounts of smiles, by design. To compensate, the cost curve is much more aggressive for the later buildings. Initial costs were supposed to be chosen based on the number of friendships that would have been bought at the time of the initial building price, but this kind of fell apart. However, it was still useful to key the costs off of the friendship curve, so I ended up with a really weird initial price array: `[4,12,30,35,45,45,45,51,51,100]`
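As a concrete sketch, the curves above translate directly into code. The base cost $F_0$ and the single shared exponent per building are simplifying assumptions here; as noted, the real game uses more aggressive curves for later buildings:

```javascript
const r = 1.6; // curve value; close to the golden ratio

// Closed form of the friendship cost recurrence: F_n = F_0 * r^n
const friendshipCost = (F0, n) => F0 * Math.pow(r, n);

// Building base prices keyed off the same curve, using the initial
// price array quoted above. A single shared exponent for every
// building is a simplification of the game's actual cost curves.
const initialPrices = [4, 12, 30, 35, 45, 45, 45, 51, 51, 100];
const buildingCost = (i, owned) => initialPrices[i] * Math.pow(r, owned);
```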

To host the game, I used GitHub Pages, which means it's all being hosted out of a gh-pages branch in the GitHub repo. I commit changes to the master branch, then do a git pull into the gh-pages branch and then a git push to sync it with master. So far this has required me to have the deployment branch checked out; if you know of a way to merge changes from one branch into another without checking out that branch, I'd love to hear about it. Also, if you want to contribute to the game, with upgrade suggestions, more witty news articles, art, etc., feel free to either send me a message or just submit a pull request on GitHub!

I used Visual Studio Code to write this project. It's intended for web development, and is in fact just a webpage being rendered in a container. It actually works pretty well for small projects, but once a file exceeds around 800-1000 lines of JavaScript, its auto-complete is too busy running around with its hair on fire to be of any real use, which is a shame. I'm kind of wondering why it was implemented as a webpage, though. Perhaps to make a point? If the point was "web apps are still slow as crap when they need to do anything nontrivial", I guess they succeeded.

## May 8, 2015

### Am I Making The World A Better Place?

A while ago, I watched The Internet's Own Boy, a documentary about Aaron Swartz. It was immensely painful to learn about someone as amazing as he was, someone who seemed to look at the world in the same way I did, after he committed suicide. At 24 minutes, Aaron's brother says something that continues to stick with me.
"The way Aaron always saw it, is that programming is magic—you can accomplish these things that normal humans can't, by being able to program. So, if you had magical powers, would you use them for good, or to make you mountains of cash?" — Ben Swartz
From a young age, I recognized that I had an unfair advantage over my peers. My programming abilities ensured that I would be able to sail through life without ever having to worry about money. I was also aware of oppression. I recognized parallels between the intellectual bullying I was subjected to in middle school and the real world, where I saw powerful people abuse the advantages they had over their peers to make themselves rich at the expense of everyone else. I was told that life simply wasn't fair, and there's nothing you can do about it.

I said that I'd make life fair.

I realized that if someone used their advantage to make things more fair instead of less fair, they'd be able to make the world a better place. Furthermore, I had what Aaron called "magical powers". I already had an advantage. This mirrors Aaron Swartz's own epiphany about using software to do something about serious problems in the real world.
"I feel very strongly that it's not enough to just live in the world as it is and just take what you're given and follow the things that adults told you to do and that your parents told you to do and that society tells you to do. I think you should always be questioning, and take this very scientific attitude that everything you've learned is just provisional, that it's always open to recantation or refutation or questioning, and I think the same applies to society. Once I realized that there were real, serious problems, fundamental problems, that I could do something to address, I didn't see a way to forget that."— Aaron Swartz
Everything I've done since then has been an (occasionally misguided) attempt towards accomplishing this. My singular goal in life became maximizing my positive influence on the world. Of course, I am not Aaron Swartz, and I did not have access to an enormous fortune. This meant making sacrifices in the short-term so that I could pursue my dreams in the long-term, and hopefully have a lasting impact.

A year ago, that meant making a choice. I needed to pay the bills, and so I now make a six figure salary working for a large software company. I am not happy there, which confuses people who think I'm successful. They are mistaken; I am not making the world a better place yet, so I am not successful yet.

While it's easy to determine if you are improving people's lives right now, what about the long term? If you spend your entire life helping some kids in Africa, versus starting a billion-dollar corporation and then hiring thousands of people to help kids in Africa, which has the more lasting impact? Trying to think about the future changes how you view things. Building an enormous company with a technology that helps a lot of people and then selling it for a billion dollars to a corporation that immediately proceeds to either shut it down or simply ruin it is short-term thinking. I love the Clock of the Long Now, because it mirrors my efforts to think far into the future, not just a few years ahead. Will your actions have a positive effect on the world in ten years? Twenty? Fifty? How can you choose a path that will ripple across the sands of time, finding ways to help people long after you've died?

This is the question that drives me. How can I change the course of history for the better? How can I maximize my impact? Even if it's only by a fraction of an inch, with our combined efforts, we might one day get there.

...

I'm still at that nameless software corporation. I am still languishing in its depths, unable to work on anything that actually matters because of a particularly annoying non-compete agreement. I won't stay much longer, but now that I am on my last legs, I am beginning to wonder if I have perhaps already stayed too long. How much money do I need to save up? What is the optimal point of departure? Have I already missed it?

Am I still working towards making the world a better place, or am I simply making mountains of cash?

## March 16, 2015

### Is There A Commercial Open Source License?

"Any headline that ends in a question mark can be answered by the word 'No'." - Davis' law
Putting Commercial and Open-Source together is often considered an oxymoron. Part of this is caused by constant confusion between the terms Open-Source and Free Software, which is made even worse by people who have more liberal interpretations of the phrase "Open-Source". In many cases, keeping the source code of a product proprietary serves no purpose other than preventing people from stealing the product. Putting an application under the GPL and selling it is perfectly reasonable for software aimed at end-users, who are unlikely to know how to compile the freely available source. Libraries aimed at developers, however, usually must resort to dual licensing.

To me, dual licensing is a bit of a hack job. Using two fundamentally incompatible licenses at the same time has always rubbed me the wrong way, even if it's perfectly legal for the creator of the software. The other problem is that this is usually done via copyleft licenses, and I would prefer a more permissive license, given that I only care about whether or not the result is commercial. This, however, turns out to be a bit of a problem.

My ideal license for a commercial library would state that anyone is free to modify, distribute, or use the source code in anything they want, provided the resulting source code retains the provided license. If the resulting product is strictly noncommercial, it would be free. If, however, the source code or any derivation of the source code is used in a commercial product, there would be a fee. It'd basically be an MIT license with an added fee for commercial derivative works, with additional restrictions on what commercial derivative works are allowed.

The problem is, even if I were to use a dual-licensing strategy, there is no open-source license that contains a noncommercial restriction, because that isn't open-source. Specifically, it violates the first criterion of the OSI's Open Source Definition:
1. The license shall not restrict any party from selling or giving away the software as a component of an aggregate software distribution containing programs from several different sources. The license shall not require a royalty or other fee for such sale.
So, the first sentence of the definition of open-source software has made it abundantly clear that my software isn't open-source because I'm trying to make money off of it. This seems overly restrictive to me, because the source code is still freely available for people to distribute and modify, just not sell. Even then, they can still sell things made with the source code, they just pay a royalty or fee for it.

Now, there are good reasons why actual free software would forbid this. For example, if someone made a library that required a royalty for commercial use, and its code was modified over and over again until it crept into all sorts of projects, the author could then claim royalties on all of those products, even if only a few lines of the original source code were used. This is obviously not conducive to creating a culture of sharing code.

However, there is another benefit of open-source software, one that is now becoming progressively more obvious: if you have the source code and compile it yourself, you can verify that the NSA (probably) hasn't injected a backdoor into it. This is a benefit of visibility, and I think it should be encouraged. The problem is that the restrictions placed on free software are too harsh for most commercial libraries, which will then often resort to simply being proprietary.

So, sadly, there are no open-source commercial licenses, because those aren't open-source. Perhaps we need a new term for software whose source is accessible but has usage restrictions. Even then, it's not entirely apparent what such a license would look like, or whether sites like GitHub would actually allow it on public repositories. I took a peek at the Unreal Engine 4 license, but despite claiming to have its source code available on GitHub, it's actually in a private repository you must gain access to. To make matters worse, the actual Unreal Engine 4 license is incredibly restrictive. You are only allowed to distribute the engine code to other people who have agreed to the license! This obviously isn't what I want, but apparently no one else seems to think that software that's kind of open-source is actually valuable.

It's an awful shame, because I really don't want to make my project proprietary, but right now I don't have much choice. As far as I'm concerned, releasing something under a restricted open-source license is preferable to making it entirely proprietary. Unfortunately, the loudest programmers are also the least likely to be willing to compromise over ideological divides.

## February 18, 2015

### Does Anyone Actually Want Good Software?

Are there any programmers left that actually care about writing good software? As far as I can tell, the software development industry has turned into a series of echo chambers where managers scream about new features and shipping software and analyzing feedback from customers. Then they ignore all the feedback and implement whatever new things are supposed to be cool, like flat design, or cloud computing, or software as a service.

The entire modern web is built on top of the worst programming language that's still remotely popular. It's so awful that IE now supports asm.js just so we can use other languages instead. With everyone relentlessly misquoting "Premature optimization is the root of all evil", it's hard to get programmers to optimize any of their code at all, let alone get them to care about things like CPU caches and why allocation on the heap is slow and how memory locality matters.

Some coders exist at large corporations that simply pile on more and more lines of code and force everyone to use gigantic frameworks built on top of more gigantic frameworks built on top of even more gigantic frameworks and then wonder why everything is so slow. Other coders exist in startups that use Scala/Hadoop/Node.js and care only about pumping out features or fixing bugs. The thing is, all of these companies make a lot of money, which leads me to ask, does anyone actually want good software anymore?

Do customers simply not care? Is everyone ok with Skype randomly not sending messages and trying (poorly) to sync all your messages and randomly deciding that certain actions are always unread on other computers and dropping calls and creating all sorts of other strange and bizarre bugs? Is everyone ok with an antivirus that demands you sign in to a buggy window that keeps losing focus every time you try to type in your password? Is everyone ok with Visual Studio deciding it needs to open a text file and taking 15 seconds to actually start up an entirely new instance even though I already have one running just to display the stupid file?

It seems to me that we're all so obsessed with making cool stuff, we've forgotten how to make stuff that actually works.

Did you know that every single person I know (except for two people) hates flat design? They don't like it. I don't like it. There's a bunch of stuck-up, narcissistic designers shoving flat design down everyone's throats, and I hate it. The designers don't care. They insist that it's elegant and modern and a bunch of other crap that's all entirely subjective no matter how hard they try to pretend otherwise. Design is about opinions. If I don't like your design, you can't just go and say my opinion is wrong. My opinion isn't wrong, I just don't agree with you. There's a difference.

However, it has become increasingly apparent to me that opinions aren't allowed in programming. I'm not allowed to say that garbage collectors are bad for high performance software. I'm not allowed to say that pure functional programming isn't some kind of magical holy grail that will solve all your problems. I'm not allowed to say that flat design is stupid. I'm definitely not allowed to say that I hate Python, because apparently Python is a religion.

Because of this, I am beginning to wonder if I am simply delusional. Apparently I'm the only human being left on planet earth who really, really doesn't like typing magical bullshit into his linux terminal just to get basic things working instead of having a GUI that wasn't designed by brain-dead monkeys. Apparently, I'm the only one who is entirely willing to pay money for services instead of having awful, ad-infested online versions powered by JavaScript™ and Node.js™ that fall over every week because someone forgot to cycle the drives in a cloud service 5000 miles away. Apparently, no one can fix the audio sample library industry or the fact that most of my VSTi's manage to use 20% of my CPU when they aren't actually doing anything.

Am I simply getting old? Has the software industry left me behind? Does anyone else out there care about these things? Should I throw in the towel and call it quits? Is the future of software development writing terrible monstrosities held together by duct tape? Is this the only way to have a sustainable business?

Is this the world our customers want? Because it sure isn't what I want.

Unfortunately, writing music doesn't pay very well.

## February 11, 2015

### Why Don't You Just Fire Them?

"Nothing is foolproof to a sufficiently talented fool."
— Anonymous
Programmers love to bash things like templates and multiple-inheritance and operator overloading, saying that they are abused too often and must be unilaterally banned, going so far as to design them out of their programming languages thinking this is somehow going to save them.

This makes about as much sense as saying that bank robbers use cars to escape police too much, so we need to ban cars.

Templates, fundamentally, are very useful and perfectly reasonable things to use in most sane contexts. They are used for things like vectors, where you want an int vector and a float vector. However, people point to the horrible monstrosities like Boost and say "Look at all the havoc templates have wrought!" Well, yeah. That's what happens when you abuse something. You can abuse anything, trust me. You simply cannot write a programming language that's idiot proof, because if you make anything idiot proof, someone will just make a better idiot.

Multiple inheritance is usually useful for exactly one thing: taking two distinct object inheritance lines and combining them. If you ever inherit more than two things at once, you probably did something wrong. The problems arise when you start inheriting 8 things and create gigantic inheritance trees with diamonds all over the place. Of course, you can build something just as confusing and unmaintainable with single inheritance (just look at the .NET framework), but the point is that the language isn't at fault for letting you do this; you have an architectural issue because you're being an idiot.

You do have code reviews, right? You do realize you can just tell programmers to not do this, or simply not use a library clearly written by complete maniacs? Chances are you probably shouldn't have more than one or two template arguments or inherited classes, and you really shouldn't overload the + operator to subtract things. If you do, someone should tell you you're being an idiot in a code review, and if you keep doing it, they should just fire you.

What really bothers me about these constant attacks on various language features or methodologies is that nearly all of them are Slippery Slope fallacies. If we let programmers do this, they'll just make something more and more complicated until nobody can use it anymore! It's the same exact argument used for banning gay marriage! If your response to a programmer abusing a feature is to remove the feature, I really have to ask, why don't you just fire them? The programmer is the problem here. If anyone succeeds in checking awful code into your code base, you either have a systemic failure in your process, or you've hired an idiot that needs to be fired.

Programming languages are toolboxes. I want my array of tools to be powerful and adaptable, not artificially castrated because other programmers can't resist the temptation to build leaning towers of inheritance. It's like forcing all your carpenters to use hammers without claws, or banning Swiss army knives because someone used one wrong. If someone is using a tool wrong, it's because they haven't been trained properly, or they're incompetent. The mindset of banning problematic features in programming languages arises from coders who have to maintain bad code, and who delude themselves into thinking that if they got rid of those pesky templates, their lives would be much easier.

Having personally seen a Visual Basic program that managed to be almost impossible to decipher even after several weeks, I can assure you this is not the case, and never will be. The problem is that it's bad code, not that it's written in C++.