You’re building a forest. You have big plans for this forest; it’s going to be big, and green, and
beautiful. A swath of trees will sweep over the rolling hills here; there’ll be a tranquil little
clearing there. It’s breathtaking in your head. It will be glorious.
Naturally for a forest, you need trees. And you want beautiful, elegant trees. So you recruit the
top treebuilders from the best treebuilding academies. You’re going for the top talent.
These guys get to work, and right away you’re pleased. They know trees; that much is obvious. Left
and right they’re constructing trees of every kind: from simple, lovely little cherry and birch
trees up to towering, majestic redwoods. Every one is gorgeous, and you couldn’t be happier.
Months go by, and you survey your project from the top of a hill. Over time you begin to grow
concerned. Whenever you visit the treebuilders below, you can see that they are busy doing very
impressive work. But when you return to the hilltop, the developing...
Here’s an observation I find myself making pretty frequently. While it’s probably obvious to some, I think a lot of developers don’t think about it. So my intention is simply to plant the idea in your head, in case it wasn’t already there.
Two kinds of developers
There isn’t just one kind of developer. Broadly speaking, there are at least two kinds: library developers and application developers.1 That isn’t to say that each of us falls 100% cleanly into one category or the other. I consider myself both. But when I’m wearing my “library developer” hat, my work looks very different from when I’m wearing my “app developer” hat.
It’s tempting to view us all the same: we all tackle the same problems, so the same rules and principles apply to all of us the same. But that simply isn’t reality. A lot of heated debates flare up on the internet between developers about the “right” way to build software, or what “matters” and what doesn’t; and what I think is often really happening is that...
I find myself growing more and more passionate about the idea that we’ve barely scratched the surface when it comes to building intelligent tools that can and should amplify our capacity as human beings. If a person can figure something out, then it should be possible for a machine to figure it out. It just takes a human brain with the insight and determination to program that machine with the right rules.
The itch of asm.js
I think the seed was planted when I read Vyacheslav Egorov’s thoughts on asm.js. Something had rubbed me the wrong way about asm.js1 for a while but I couldn’t quite put my finger on it. If I had to pick an excerpt from Egorov’s post that best sums it up for me, I suppose it would be this:
This might sound weird and hyperbolic, but for me pattern matching a statically typed subset
I feel a related itch whenever the topic of software performance...
I used to be a bit of a perfectionist. I suppose I still am in many respects1. But an interesting trend I’ve noticed about myself is that the more expertise I develop in an area, the less of a perfectionist I become.
I think this is a necessary adaptation. When you’re new to a field, you tend not to notice things that are wrong. You don’t even know what “wrong” is, a lot of the time. But as you develop a clearer, deeper understanding of a subject, your ability to see flaws is sharpened. Gradually, everything seems wrong for one reason or another.
In this predicament—having a heightened perception of flaws—a perfectionist has two choices: try to fix them, or look past them. To fix them is sort of like taking the high road, in my mind; and it can be a noble choice. Often, the only ones who can fix something are expert perfectionists, with the knowledge to understand the job and the determination to actually do it.
Personally, I often choose the low road. And I believe that’s the right...
Here’s a mistake people often make: thinking that the absence of any obvious disproof is the same thing as proof.
I took a class in college called Gender and Language. It was sort of a sociology-meets-linguistics course. One of the first points our professor made was a theory about the concept of feminism and why it is, compared to what you might expect, a relatively controversial issue. (I think most of us have met people with a negative reaction to the word “feminist”; or anyway, I certainly have.)
The professor asserted that this perception issue could be explained by the word feminist itself: “All other -ist and -ism words are negative in their connotations,” she told us. “Racist, sexist, anarchist, fascism, communism, nihilism, antisemitism”—and she went on for a bit. Then she briefly challenged us to think of any exceptions; and when none sprang to anyone’s mind right away, we moved on.
Of course, later that day some very obvious exceptions started popping into my head: idealist
I recently had a brief1 online exchange with Matt Copeland, a former coworker at ThoughtWorks, about the website shouldiuseacarousel.com. It’s a fun little site presenting several bullet points against the use of carousels (rotating banners) in website UIs2.
I’m no designer or UX expert. Over the years I’ve noticed that I do seem to take a greater interest in frontend-y stuff than most other developers I work with; but that’s a far cry from having expertise in user experience. Still, when Matt first posted a link to the carousel-bashing site I found myself responding skeptically. Of course, maybe that’s because I respond skeptically to pretty much everything that exists in this world.
Anyway, this shall be my attempt to explain my skepticism, concisely if I can (but knowing me, I probably can’t).
Judgments require context
Consider this point:
Where is it? To answer that question we need some frame of context, or reference. For example, we could define...
My last course as a grad student at CMU was Entrepreneurship for Software Engineers, in which teams of students basically worked on startup ideas for one semester, sharing their progress and collecting feedback during each class session and presenting to a small group of VC reps at the end of the term. I worked on a silly little app called InstaPie–which I haven’t touched in months (mainly because I don’t have an Apple computer anymore!), though I do plan to pick it back up soon, I swear—and I remember during my presentation to the VCs, one of them asked, “What problem are you trying to solve here?”
This wasn’t my first exposure to the question, of course. We’d been taught to always keep this question in our crosshairs, to not lose sight of the goal. Identifying the problem and proposing a solution is an important part of any elevator pitch, I hear. If you can’t answer that question, then you’ve lost your way somehow. Start retracing your steps until you get back to the place where...
Most of us in software (and probably in other fields) know about hero culture. It’s a concept everybody loves to hate. The term refers to an environment where individuals work in isolation and thrive on receiving sole credit for their work. This is generally perceived as leading to inflated egos and poor cooperation among developers. It’s the same phenomenon you see in professional sports, when star athletes are sometimes accused of not being good “team players” and getting greedy with the ball, wanting to be the center of attention.
So there are social reasons to dislike hero culture. There are plenty of practical reasons, too. One involves a principle known as the bus factor: the more you rely on a hero, the more vulnerable you are in the event of losing him or her—e.g., if he or she is hit by a bus, or leaves for another company.
But you guys know me. If everybody else is going left, I’m going to go right (a highly predictable trait that my friend Sameer frequently reminds me of...
Looking back on some of my recent posts on this blog, I’m a bit annoyed at myself for being too hand-wavy and not saying anything all that original.
I’d like to make an effort, at least in my next few posts, to get more concrete and challenge some of the conventions I’ve observed in the software world. I’ll start with an idea that I think is not all that radical, though it would mark a significant departure for most teams I’ve worked on.
How we think about priority
The way most teams prioritize work seems totally sensible on the surface. Essentially, tasks are assigned some priority ranking such as “high”, “medium” or “low”; and then the highest-ranked tasks actually get assigned to people. In a perfect world this would mean that the most important things always get done, and then when there’s a surplus of time a team can “catch up” on issues that aren’t quite as urgent. In practice I think a different reality tends to emerge.
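To make the mechanism concrete, here’s a minimal sketch in Python of the ranking scheme described above (the task names and priority levels are invented for illustration). It shows the failure mode: if urgent tasks keep arriving at least as fast as the team works, strictly picking the top-ranked task means the low-priority item never reaches the front of the queue.

```python
import heapq

# Map priority labels to sort keys; a lower number means more urgent.
PRIORITY = {"high": 0, "medium": 1, "low": 2}

class Backlog:
    """A backlog that always hands out the highest-priority task first."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker: preserves insertion order within a level

    def add(self, name, priority):
        heapq.heappush(self._heap, (PRIORITY[priority], self._counter, name))
        self._counter += 1

    def next_task(self):
        return heapq.heappop(self._heap)[2]

backlog = Backlog()
backlog.add("fix login crash", "high")
backlog.add("update docs", "low")
backlog.add("refactor parser", "medium")

# Each cycle, one new urgent task arrives before the team runs out of
# urgent work, so "update docs" is never the task that gets picked.
done = []
for cycle in range(3):
    backlog.add(f"urgent bug #{cycle}", "high")
    done.append(backlog.next_task())

print(done)  # "update docs" is nowhere in the list
```

In this toy run the team completes three tasks and every one of them is high-priority; the “catch up when there’s a surplus of time” phase simply never arrives, which is the different reality the next paragraph describes.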
On projects I’ve been a part of, inevitably it turns...