Reply to: One less excuse

Alexandru Bolboaca rejects the notion that software hasn’t become a proper, professional engineering discipline because it’s too young:

It recently dawned on me how often I say or hear the words “our industry is young”. There’s truth in these words. . . .

And then he goes on to explain the many reasons why “too young” is a lame excuse for a lack of professionalism in software.

But I think he misses the real reason software is mostly written by people who lack the discipline he’d like to see: It’s because software is so easy.

It’s really hard to make a bridge that won’t fall down. It’s much harder yet to make one that not only won’t fall down but is also affordable.

Relatively speaking, software is trivially easy and very cheap. Anybody can start making small programs, just like anybody can stack one rock on top of another. And because software is so easy, scaling up from small programs to medium-sized ones actually works: anybody can do it. Scaling up from stacking one rock on top of another to building a stone bridge is another thing altogether.

I suspect that we’re about to get a test of this notion. Pretty soon now I suspect that materials science will give us some new structural materials that are cheap, extremely strong and very easy to work with. I don’t know what it’ll be—carbon fiber, nano-assembled sapphire, resin—but it’ll be cheap enough and strong enough that any yahoo will be able to put up a structure that’s as easy to build as a lean-to but as sturdy as a house.

My prediction: Once that happens, we’ll see a partial collapse of architecture as an engineering discipline. There’ll still be real architects (because the education programs are so well established), but very quickly only large or public buildings will be designed by real architects. If you can throw up a sturdy building in a few hours for a few dollars, people will totally do that, just like they’ll currently write little programs that do something they need done, even if the programs are otherwise crappy in many ways.

Chart of the Week: Electric Takeover in Transportation

From the IMF blog, a great chart showing the rate at which motor vehicles took over from horses early in the 20th century. Putting current motor-vehicle and electric-car use on the same graph makes a pretty good visual case that we might be as little as 15 years from the cross-over point where half the vehicles on the road are electric.

Greater affordability of electric vehicles will likely steer us away from our current sources of energy for transportation, and toward more environmentally friendly technology. And that can happen sooner than you think.

Source: Chart of the Week: Electric Takeover in Transportation | IMF Blog

Handy MTD annexation

The Carle Clinic I use will be in the bus district, hopefully in time for the new bus schedules set to come out sometime in mid-August. Very handy for me.

More than 460 acres of land in southwest Champaign officially will become part of the Champaign-Urbana Mass Transit District, perhaps as soon as today, the transit district’s board decided Wednesday.

Source: MTD board unanimously approves annexing swath of southwest Champaign

What cell phones teach us about the power grid

Back in the day, the telephone network was a regulated monopoly. As long as the phone company kept the regulator happy, they were permitted to earn a set rate of profit on their investment. This resulted in a couple of interesting effects.

First of all, the company was incentivized to invest more in infrastructure: The more they invested, the higher their profit (which was a regulated rate times the size of their investment). This is very different from an unregulated company, where investment is viewed as a cost.

Second, while keeping the regulator happy was always a complex dance, the regulator tended to focus on a few key metrics, one of which was network uptime. This incentivized the phone company to use that large infrastructure investment to produce a network of extreme reliability.

And that network was reliable. In my personal experience in that era, a wired phone always had service: For two decades as a youth I had literally 100% success picking up a phone and getting a dial tone. Likewise, calls did not drop. Service was rated in nines: 99.999% uptime was five nines, 99.9999% uptime was six nines.
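As a back-of-the-envelope illustration (my arithmetic, not from the original post), here’s what those “nines” ratings translate to in allowed downtime per year:

```python
# Convert an availability rating in "nines" to allowed downtime per year.
# Simple arithmetic sketch, not any official telecom formula.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes


def downtime_minutes_per_year(nines: int) -> float:
    """Annual downtime permitted at an availability of `nines` nines."""
    downtime_fraction = 10 ** (-nines)  # e.g. five nines -> 0.00001
    return MINUTES_PER_YEAR * downtime_fraction


for n in (2, 5, 6):
    print(f"{n} nines: {downtime_minutes_per_year(n):.2f} minutes/year")
```

Five nines works out to roughly five minutes of downtime a year; two nines allows well over three days.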

Of course, that sort of reliability is impossible with cell phones. They move around. Worse yet, they go places where radio signals simply can’t reach.

Cell phone reliability is pretty darned good—let’s call it 98%, as long as you’re not trying to get service in places where nobody else cares if there’s service (the middle of the desert, the middle of the ocean, etc.). But people are not surprised when they find a spot where there’s no service, nor are they surprised if a call drops when elevator doors close or they drive into a tunnel.

This is not to say that cell phone service is bad. My point is simply this: To get the advantages of cell phones, people have accepted a drop in telephone service reliability from six nines down to less than two.

I think this is particularly of interest because I see a potential parallel with the power grid.

The big problem with solar and wind power is that they’re crappy at providing baseload power, for obvious reasons: nighttime, cloudy days, calm days, etc.

If you want a power grid to provide five or six nines of availability, you really need enough fossil-fuel (or nuclear) generation capacity to cover a large fraction of your total power needs: at least 80%, probably more unless you have considerable diversity in your renewable sources, both in type (solar and wind) and in geography (the wind is always blowing somewhere, and the sun shines at different hours in different places).

But just as people learned to get by with less than two nines of phone network reliability, people could certainly learn to get by with a less reliable power grid as well.

Thinking of household use, there are certain things that really need fairly reliable power (refrigerator, freezer, furnace), but beyond those few things, we only require a high-availability grid because we’ve set things up with the expectation that it will always be there.

Just two or three modest changes to the way we use power could easily accommodate a less-reliable grid.

The easiest one would be for each household to have a guaranteed level of power—enough to keep your food fresh, your pipes unfrozen, and a couple of lights turned on—and then make additional power available on an as-available basis. Alternatively, you could go with a market-based measure where power was cheap when it was plentiful and expensive when it was scarce. A third option would be to distribute the resiliency, with each household providing its own backup power storage or generation capability.
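The first of those options can be sketched as a simple allocation rule. This is just an illustration of the idea, not a real smart-grid protocol, and all the numbers (the 0.5 kW baseline, the demands) are hypothetical:

```python
# Sketch of the guaranteed-baseline idea (all numbers hypothetical):
# every household gets its guaranteed allotment first; whatever supply
# remains is shared pro rata among demand above the baseline.


def allocate(supply_kw: float, demands_kw: list[float],
             baseline_kw: float = 0.5) -> list[float]:
    # Stage 1: meet each household's demand up to the guaranteed baseline.
    guaranteed = [min(d, baseline_kw) for d in demands_kw]
    total_guaranteed = sum(guaranteed)
    if supply_kw <= total_guaranteed:
        # Not even enough for the guarantees: scale them down evenly.
        scale = supply_kw / total_guaranteed if total_guaranteed else 0.0
        return [g * scale for g in guaranteed]
    # Stage 2: split the surplus pro rata among demand above the baseline.
    extras = [max(d - baseline_kw, 0.0) for d in demands_kw]
    total_extra = sum(extras)
    if total_extra == 0:
        return guaranteed
    share = min(1.0, (supply_kw - total_guaranteed) / total_extra)
    return [g + e * share for g, e in zip(guaranteed, extras)]
```

With 3 kW of supply and three households asking for 2.0, 1.0, and 0.4 kW, each gets its baseline first and the leftover is divided in proportion to what’s requested above it.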

My point here is not to solve the issues for a smart grid, but simply this: For a big enough payoff, like the payoff of an internet-connected supercomputer that you can carry in your pocket, we would accept a considerable downgrade in the reliability of our power grid.

The payoffs from renewable energy arguably are that big. (In particular, not rendering the planet uninhabitable for humans. But that’s a payoff that’s uncertain and diffuse, with the gains—especially the early gains—going to people other than the ones who need to make the sacrifices.) But there are payoffs to everybody: fewer particulates in the air, fewer pipeline and tanker spills, fewer truck and rail accidents hauling coal and oil through towns and cities, fewer worker deaths in the coal-mining and oil-drilling industries. And then there are the cost savings: Renewable power has the potential to be very cheap and very reliable in the out-years, once the infrastructure has recouped its initial capital costs.

It might well be worth getting past the idea that the power grid should provide near-perfect reliability, given the payoffs involved in accepting a bit less.

The Federal Reserve and the “gig” economy

The “gig” economy: all the sorts of work arrangements where you’re not a permanent employee and can’t expect that work one day implies that you’ll have work the next day—freelancing, contracting, temp work, casual labor, and most recently, software-mediated contract work like Uber driver.

These sorts of work have been growing as a fraction of all work. According to the Bureau of Labor Statistics, contingent workers have gone from 10% of the workforce to 16% in the last ten years. In fact,

all of the net growth in aggregate employment in the decade leading up to 2015 can be accounted for by contingent work arrangements, which means there has been no net employment growth in traditional work arrangements.

Source: FRB: Brainard, The “Gig” Economy: Implications of the Growth of Contingent Work

This matters to everyone with an interest in the U.S. economy, but it matters particularly to the Federal Reserve, which is charged by Congress to:

promote effectively the goals of maximum employment, stable prices and moderate long-term interest rates.

Source: The Federal Reserve’s Dual Mandate

So this raises the question: Does strong growth in the number of freelancers, on-call temps, and Uber drivers mean that we’re getting closer to maximum employment? Or that we’re getting further away?