When my wife and I moved from Philadelphia to San Francisco in 2010, we brought our espresso machine with us.
Back in Philly, we’d had a modest kitchen with just enough counter space for the machine. On lazy weekend mornings, I’d often turn it on and prepare us each a latte. It was a nice little ritual.
So we brought it to San Francisco with us. But our first apartment, a little studio we rented from a friend of a friend, didn’t have the room for it. So the machine went into storage.
Then in 2011, we moved to our current apartment in the Mission. The espresso machine is back; but we don’t have quite the counter space that we did in Philadelphia, so it’s sitting on a little cart underneath our microwave, unplugged.
This isn’t terribly inconvenient. To use it, I only need to pick it up and set it somewhere—say, on our table—then put it back when I’m finished. Still, the fact remains: I haven’t used it once since moving here.
It’s natural to think of being smart as an asset. This is obvious in many ways, so I don’t feel I need to enumerate them. But there are also ways that it can be a liability; and since this is the contrarian view, I naturally want to talk about it.
Before I start, though, a note about the word “smart”: it can mean many things. What I am specifically referring to now is what I will call raw brain power: the capacity of a person’s mind to think quickly, grasp tricky concepts, store a lot of information at once, and so on. If the mind were a computer, in other words, I’d be talking about hardware (CPU, memory, etc.) as opposed to software.
The software of a computer system makes use of the hardware. It isn’t the other way around. Powerful hardware on its own is useless. For the purpose of this argument I propose that we think of being “smart”—i.e., of having a lot of brain power—as analogous to having a computer with powerful hardware. In contrast, having good instincts, solid judgment...
In a strongly-worded blog post back in 2010, David MacIver asserted that there is a fundamental flaw in DataMapper, an ORM library for Ruby. The core of his complaint is that DataMapper’s default API for saving records hides errors, making it difficult to diagnose what went wrong when something fails. This in turn increases the likelihood of defects going unnoticed during development and testing, resulting in buggier software.
Borrowing from MacIver’s post, here is a boilerplate example of how one might attempt to save a record and report any failures using DataMapper:
    my_account = Account.new(:name => "Jose")
    if my_account.save
      # my_account is valid and has been saved
    else
      my_account.errors.each do |e|
        puts e
      end
    end
The above can be pretty annoying to anyone who expects conciseness from an API. Most developers don’t like the idea of having to write several lines of code just to save a record to a database.
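One way to recover conciseness without swallowing errors is a small helper that raises when a save fails. The sketch below doesn’t use DataMapper itself—`FakeRecord` and `save_or_raise` are hypothetical stand-ins I’m using for illustration, with the same `save`/`errors` shape as a DataMapper record:

```ruby
# A stand-in for a DataMapper-style record: #save returns true/false,
# and #errors holds validation messages after a failed save.
class FakeRecord
  def initialize(valid)
    @valid = valid
  end

  def save
    @valid
  end

  def errors
    @valid ? [] : ["name must not be blank"]
  end
end

# Save the record, or raise with the validation errors attached,
# so a failure can't silently slip past development and testing.
def save_or_raise(record)  # hypothetical helper, not a DataMapper API
  return record if record.save
  raise "Save failed: #{record.errors.join(', ')}"
end
```

With something like this, the happy path collapses back to one line (`save_or_raise(my_account)`), while a failed save blows up loudly instead of returning an easily-ignored `false`.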
TL;DR: Check out my new gem, SafeYAML. It lets you parse YAML without exposing your app to security exploits via arbitrary object deserialization.
There was quite a stir in the Rails community recently about a serious security vulnerability in Rails. To be more specific: every version of Rails. We found out about this right away at Cardpool, in part because Cardpool is a YC company and Paul Graham forwarded an e-mail from Thomas Ptacek to all YC alums warning of the vulnerability pretty much as soon as it was discovered.
Without getting too caught up in the weeds, I will just say the vulnerability was ultimately a consequence of the fact that Ruby’s YAML library by default permits the deserialization of arbitrary Ruby objects. This is a problem for Rails—as well as many other Ruby frameworks, to be fair—because, until patches were released to address this problem, any Rails app could be “tricked” into parsing malicious YAML by basically anybody, without any special credentials. The...
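To make the class of exploit concrete: a `!ruby/object:` tag in a YAML document tells the parser to instantiate an arbitrary Ruby class. The snippet below is a hedged sketch—the tag and class name are illustrative, not the actual Rails payload—showing how Psych’s `safe_load` (the same whitelisting idea SafeYAML is built around) rejects such tags instead of deserializing them:

```ruby
require 'yaml'

# An illustrative (harmless) payload: a YAML tag asking the parser
# to instantiate an arbitrary Ruby class.
payload = "--- !ruby/object:Object {}"

# safe_load refuses to construct classes that aren't explicitly
# whitelisted, raising Psych::DisallowedClass instead.
begin
  YAML.safe_load(payload)
  puts "parsed (unexpected)"
rescue Psych::DisallowedClass => e
  puts "rejected: #{e.message}"
end
```

The danger with the old default behavior was precisely that no such rejection happened: whatever class the attacker named got built, with attacker-controlled instance variables.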
I was raised in a devout Christian family, which resulted in a fair amount of inner conflict and soul-searching throughout my academic life, particularly with respect to my ninth-grade education on evolution. This in turn ultimately led me to read a book called Darwin’s Black Box by Michael Behe, which argues in favor of intelligent design on the basis of a concept called irreducible complexity. It is actually a pretty reasonable argument, in my opinion—though I’m admittedly no expert on the subject—at least in that its premise seems plausible. To summarize in one sentence: Behe argues that there are systems in present-day organisms consisting of interacting parts, each of which on its own would provide no reproductive advantage to an individual and so cannot be explained purely by Darwinian natural selection. Only taken as a whole do these systems provide reproductive advantages; and so some other process must have generated them (which is where intelligent design enters the picture).
More than one of my high school English teachers taught us that when you’re writing a paper, you should start by making an outline of your high-level points. This way, they told us, you would have a “skeleton” paper already written, which you could then “flesh out” by filling in appropriate details here and there.
I never much internalized this process of starting off with an outline. I wish I had.
To design or assemble
My first project at ThoughtWorks was in Dallas, TX. During a car ride back to the office after lunch one day, I was having a conversation with Billy, one of the client company’s developers; and he mentioned that he had recently been to a Google conference to learn about Google Web Toolkit (one of the technologies we were using on the project), among other things. I can’t recall everything we talked about, but something that Billy said during that conversation has stuck with me ever since:
Companies like Google, Microsoft, Apple—they are...
Recently my friend Chuck reminded me of a conversation he and I had ages ago about a company called Steorn. This is a company that publicly claimed, back in 2007, to have developed an overunity technology. Chuck chastised me for having persuaded him to take the company seriously; to this day, despite their refusal to back down, they have still not convincingly broken the second law of thermodynamics.
Most of my acquaintances with a modest amount of scientific knowledge, of course, dismissed Steorn from the very start. What the company claims to do violates a known law of physics, they argued; therefore it is impossible; therefore they are either lying or confused. Personally, I never did and probably never will fully sympathize with this attitude. While I agree that Steorn probably do not have what they have claimed (and I certainly have no intention of arguing with the laws of thermodynamics!), I disagree with the premise that we can be so sure of things like this that we are justified...
I read an article in the New York Times recently entitled Has Apple Peaked? and found myself nodding my head to a lot of the author’s points. The basic premise of the article was this: maybe Apple has peaked, and maybe it isn’t because Steve Jobs has passed away but rather because, as a company on top of the world, they now have everything to lose and can no longer take big risks.
I think there’s something to this, and I’d add another source of inertia for consideration: hubris (big surprise to those of you familiar with my general dislike for Apple, I’m sure!). At Apple’s scale, given the massive success they’ve enjoyed over the past several years, I have no doubt that the company’s sense of self-importance is extraordinarily high. Which is obviously justified to a significant degree. But one common observation I have about human nature—and I am increasingly convinced that it applies to businesses the same way it applies to individuals—is that it is very easy to pat yourself on the...
As I’m sure plenty of you already know (the title of this blog is a bit of a giveaway), I was a philosophy major in college. Which means I was not a C.S. major. But that’s not the full extent of it: I didn’t minor in C.S., either; I actually took no computer science courses at all in college.
I did recently receive an M.S. in software engineering from Carnegie Mellon; but the courses in that program were higher-level in nature: software architecture, process management, software metrics, entrepreneurship—that sort of thing. And so I’ve still never really had an academic foundation for a lot of the more theoretical stuff that those with bachelor’s degrees in computer science have.
To clarify: I do know a decent amount of C.S. stuff in practice, because:
I worked for two years at an algorithmic trading company, where performance was a key concern (use of optimal data structures was critical) and the software was highly concurrent (so I got plenty of hands-on experience...
Today I felt like writing something more low-level than I’ve written in a while… and since C# is about the lowest I go (yeah, pretty sad—I haven’t really earned my beard yet), that will be my language of choice for this post.
In a language like Java or C# (or even Ruby), this would be totally easy...