The Software Engineering Game

Players: 1 or more
Goal: Score the most points over 10 rounds
Summary: Each round represents a year, and each die in a player's dice pool represents some software capability that either generates annual recurring revenue or creates a support burden.
 
Setup:
  • Gather a big pool of D&D dice, ranging from d4 to d20, or use the Google dice roller.
  • Get a piece of paper to keep score. Write each player's name across the top, and the rounds (1-10) along the left side
  • Pick a random player to start
Game Play:
  1. The current player chooses one of two actions:
    1. Add a new d4 to their "revenue" dice pool.
    2. Upgrade an existing die to the next level (d4 to d6, d6 to d8, and so on).
  2. Players take turns making this decision, passing to the left, until all players have chosen an action.
  3. Once all players have chosen, they roll all the dice in their revenue dice pool:
    • For each die that is not a one, the player scores a point. This represents annual revenue. Add this to a running total for the player.
    • Each die that is a one moves into a separate "legacy" pool. A player with dice in their legacy pool cannot add new dice or upgrade dice in their revenue pool on their turn. Instead, they may only move one die from their legacy pool back into their revenue pool.
    • If _all_ the dice in a player's revenue pool roll a one, this represents a catastrophic failure. The player's score total is reset to zero.
  4. The starting player position shifts one to the left.
  5. IF ROUNDS < 10 GOTO #1
  6. After finishing the 10th round, add up the scores for all the players. The highest total wins.
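
If you'd rather let a computer do the rolling, here's a minimal Python sketch of the rules above. Treat it as a sketch under a couple of assumptions: the exact upgrade ladder (whether you include the d10) and the two example strategy functions are mine, not part of the rules.

    import random

    DIE_LADDER = [4, 6, 8, 10, 12, 20]  # assumed upgrade path

    def play_game(choose_action, rounds=10, rng=random):
        """Play one 10-round game for a single player and return their score."""
        revenue, legacy, score = [], [], 0
        for _ in range(rounds):
            if legacy:
                # Legacy dice block new dice and upgrades; recover one instead
                revenue.append(legacy.pop())
            else:
                choose_action(revenue)  # add a d4 or upgrade an existing die
            rolls = [rng.randint(1, sides) for sides in revenue]
            if rolls and all(r == 1 for r in rolls):
                score = 0  # catastrophic failure: running total resets to zero
            else:
                score += sum(1 for r in rolls if r != 1)
            # Every die that rolled a one moves into the legacy pool
            legacy.extend(d for d, r in zip(revenue, rolls) if r == 1)
            revenue = [d for d, r in zip(revenue, rolls) if r != 1]
        return score

    def moar_dice(revenue):
        revenue.append(4)  # Strategy #1: always add a new d4

    def balanced_d8(revenue):
        # Strategy #3: upgrade the smallest die until it's a d8, then add a d4
        small = [d for d in revenue if d < 8]
        if small:
            i = revenue.index(min(small))
            revenue[i] = DIE_LADDER[DIE_LADDER.index(revenue[i]) + 1]
        else:
            revenue.append(4)

    for name, strategy in [("Moar Dice", moar_dice), ("Balanced d8", balanced_d8)]:
        games = [play_game(strategy) for _ in range(10_000)]
        print(name, "average score:", sum(games) / len(games))

Running a few thousand games like this is a quick way to check whether the play-test results below are luck or a real difference between strategies.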

 

I've play-tested this exactly three times, using three different strategies:

Strategy #1 - Moar Dice!
  1. Add d4, +1 point
  2. Add d4, +1 point, 1 legacy die
  3. Recovered legacy die, +2 points
  4. Add d4 = Catastrophic failure! Reset to zero
  5. Recovered legacy die, +1 point
  6. Recovered legacy die, +2 points
  7. Recovered legacy die, +3 points
  8. Add d4, +3 points, 1 legacy die
  9. Recovered legacy die, +3 points, 1 legacy die
  10. Recovered legacy die, +2 points, 2 legacy dice
Total: 14 points
 
Strategy #2 - Minimum Risk
  1. Add d4, +1 point
  2. Upgrade to d6, +1 point
  3. Upgrade to d8, +1 point
  4. Upgrade to d12, +1 point
  5. Upgrade to d20, +1 point
  6. Add d4, +2 points
  7. Upgrade to d6, +1 point, 1 legacy die
  8. Recovered legacy die, +2 points
  9. Upgrade to d8, +2 points
  10. Upgrade to d12, +2 points
Total: 14 points
 
Strategy #3 - Balanced d8 Risk
  1. Add d4, +1 point
  2. Upgrade to d6, +1 point
  3. Upgrade to d8, +1 point
  4. Add d4, +2 points
  5. Upgrade to d6, +2 points
  6. Upgrade to d8, +1 point, 1 legacy die
  7. Recovered legacy die, +2 points
  8. Add d4, +3 points
  9. Upgrade to d6, +3 points
  10. Upgrade to d8, +3 points
Total: 19 points

All Technical Debt is Credit Card Debt

When you find yourself in the position of trying to persuade someone who doesn't share your life experiences, you may reach for a common metaphor. This metaphor becomes something that you can use to create a frame of reference...a set of facts that are independent of the actual facts, because the actual facts are too complex or obscure to communicate.

In programming, perhaps the best example of this is technical debt. This metaphor has been misunderstood, misappropriated, and generally misused almost since its inception. Why? Because it's so very useful! People who pay for software projects often have access to money, and people who have access to money often understand debt. So "technical debt" becomes a common frame of reference between holders-of-cash and writers-of-code, even if the participants in the conversation don't have a clear definition of what "technical debt" means. That lack of clarity leads naturally to creatively extending the metaphor. If you use this metaphor often, you may one day find yourself arguing with someone about whether a particular bit of crufty code is "long term debt" or "short term debt" or "high interest debt" or "low interest debt". If you work in finance, you may even push the metaphor into more esoteric instruments like convertible bonds or interest rate swaps. Maybe you've even created clever systems for tracking debt or measuring debt, adding in additional metaphors like "interest rate" and "payment terms".

When this happens to me, I start pushing back. That's because, after using this metaphor for years, I've found that all these creative extensions of the metaphor only serve to obscure the issue at hand, which is: We wrote some code that doesn't fit our current understanding of the problem, and now we should change it. Of course, that sort of argument doesn't usually play well with others, so instead of decrying the extension of the metaphor, I simply pick the best one and go with it, which is:

All technical debt is credit card debt.

When you use a credit card to buy something that you can't afford, you generally tell yourself that you will pay it off "soon". Every once in a while, you make a conscious decision to buy something with a credit card and not pay it off for a while. And every once in a great while, this turns out to be a good idea in retrospect. But usually, when you use high-interest credit to buy something you can't afford, you're making a promise on behalf of your future self to go without something else that you might otherwise want to buy (with interest), which reduces the total area under the things-you-want curve.

When you buy something that you can afford with a credit card, that's a completely different thing. You're just playing a little game with short term cash flow. Your credit card company gives you a grace period of somewhere between 30 and 60 days from when you buy a thing to when you actually have to pony up the cash. Additionally, carrying around a little plastic card is a lot easier (and safer) than carrying around all that cash. So you buy things with your credit card, and pay the bill in full every month.

On healthy software development projects, some amount of "grace period" debt is expected. It appears as a side effect of building software. Small experiments create small amounts of debt, which is paid off when the experiment is over. Decisions are deferred for a few minutes, hours, or days...until a design starts to take shape. Code is left ugly or slow until it's proven to work with tests, and then cleaned up once the scope and scale of the problem are well understood. Effective programmers have a good internal sense of when to shift back and forth between getting things done and cleaning up small messes.

If the team cares about the quality of their work, you won't ever have to tell them to pay off this "grace period" debt...indeed, you probably won't even see it, let alone track it or direct them to clean it up. So it won't ever be tracked on a card wall, discussed in planning meetings, or documented on a wiki. However, since only the programmers have visibility into what debt has actually been accrued, you have to trust them to decide when it's appropriate to pay off debt, and when it's appropriate to work on new functionality. Trying to shift responsibility for making that decision away from the folks who have their hands in the code every day changes the conversation from "pay it off in full every month" to "how many months of interest should we pay?" Most of the time, this is a bad idea.

Just like a credit card, if you don't pay off debt during the grace period, it can quickly spiral out of control. The more debt there is, the less money (time) there is to pay it off due to the interest payments. Just like the poor souls whose financial lives have been crushed by credit card debt, eventually all your income goes to interest payments and there's nothing left over at the end of the month. This will often manifest as a "critical mess", where making the changes to fix one bug creates two more bugs. Your programmers will then start to fear making changes to the code. They may suggest a rewrite. Or maybe they'll just quit. In either case, once this happens the code becomes effectively impossible to change, and your accumulation of technical debt means you'll have to declare technical bankruptcy.


Two Player (Cooperative) Rules for Dixit

Dixit is a favorite game in my house, but we can’t always find the minimum three players to play. So my daughter came up with a two player cooperative variant that I think is even more fun than the original. It uses the standard pieces and cards from the original Dixit game, and takes about 20 minutes to play. Here’s how it works:

Instead of competing to score the most points, the players work together to score 4 points (or 5 for a harder difficulty) before running out of cards. The players score points by having one player give “clue” cards to the other player, to get them to guess which of the 6 cards on the board is the “secret” card.

Players alternate taking turns as the active player. The active player deals out 6 cards into the 6 card slots on the Dixit board. They then choose a card slot tile with a number on it to represent the “secret” card they want the other player to pick. They place this tile face down on the table.

Once the active player has selected the secret card, they may draw up to 7 cards from the deck and select as many of these cards as they like to represent “clue” cards. These cards are given to the other player, and the rest are discarded. The active player is not allowed to give any other verbal or non-verbal hints or clues about what the secret card might be. The active player may also choose to give no clue cards, if giving any of the available cards would simply misdirect the other player.

The other player then uses the clue cards to guess the secret card. If they guess correctly, the players score a point. If they have enough points, they win! Otherwise, all the cards played on this turn are discarded, and the other player becomes the active player. Use one of the player score tokens to keep score on the point track on the Dixit board.

If the players do not have enough cards to play the 6 initial cards plus at least one clue card at the start of a turn, the game is over immediately. If the active player runs out of cards while drawing clue cards, the game ends after the current turn.

I actually like this variation much more than the original Dixit rules. I really like cooperative games, and this one plays faster and simpler than the original, while still keeping the fuzzy pattern-matching that makes the original game fun.


Do Bigger Teams Use Fewer Technologies?

I speculated on Twitter that adding members to a software team reduces the number of new technologies that team can use. This is because unless all members are comfortable with a new technology, adding it can cause a Conway's-law-like split. For example, a system written in Java and C++ might split into two systems managed by two different teams, each exclusively using one language. The resulting teams would each have a less diverse technology stack, further contributing to the pattern. This speculation is based on my personal observations that smaller software teams tend to be more diverse in the primary languages and tools they use, while larger teams tend to be less so (i.e. "We use anything that works" vs "We're a Java shop"). 

To examine this a little closer, I first need to explain what I mean by a "team". A team is a group of people who succeed or fail together. If you can fail while I succeed, then we're not on the same team. What success means for a software team varies, but it's harder for members of a team to participate in the success of that team if they can't contribute equally. A simple way to achieve this is to have every person do/know everything. As I examined in Powers of Two, a pair of programmers can do this quite easily.

As a team grows, it's less likely that every person on the team will be able to use all technologies equally. A common reaction to this is to limit the number of technologies used, relying only on the ones that everyone on the team can use effectively. As more people are added, the only common technology becomes a single language or platform, and the primary prerequisite for joining the team is that you know this technology well.

Adding a new technology into this mix can cause the team to fracture along the technology boundary. If some programmers prefer working in one language/editor/platform or another, or have a preference for working on the type of problem that technology was adopted to solve, the goals of the individuals on the team can begin to diverge. Unless the team takes steps to resolve these conflicts, they will eventually lose a common definition of success.

As one sub-group starts to view success differently than the others, the pressure to split increases. The wider organization can respond to this by changing the org structure, by rolling back the addition of new technology, or by forcing the individuals to continue working as a single unit with divergent goals. In two of these three cases, you wind up with teams with a technology monoculture...the same result as if you had never introduced the new technology at all.

This idea is speculative, but I do think it's something you could measure empirically. If you could find a way to tease out the actual teams, Github organization data might have everything you need. You could see how many different languages each team regularly uses, and try to compare that to the size of the team to see if there's a pattern. I'll leave that exercise to the Github data mining experts.
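
For the data-mining inclined, here's a rough Python sketch of what that measurement could look like using the public GitHub REST API. The org name is a placeholder, you'd need a token with read access to the org's teams, and pagination is ignored for brevity.

    import os
    from collections import Counter

    import requests

    API = "https://api.github.com"
    ORG = "your-org"  # placeholder organization
    HEADERS = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}

    def get_json(url):
        resp = requests.get(url, headers=HEADERS)
        resp.raise_for_status()
        return resp.json()

    # For each team in the org: how many members, and how many languages
    # show up across the repos that team works on?
    for team in get_json(f"{API}/orgs/{ORG}/teams"):
        members = get_json(f"{API}/orgs/{ORG}/teams/{team['slug']}/members")
        languages = Counter()
        for repo in get_json(f"{API}/orgs/{ORG}/teams/{team['slug']}/repos"):
            # /languages returns bytes of code per language for the repo
            languages.update(get_json(repo["languages_url"]))
        print(f"{team['slug']}: {len(members)} members, "
              f"{len(languages)} languages {sorted(languages)}")

Plotting team size against language count across enough organizations would be one way to see whether the pattern holds.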


Powers of Two

There are a few "best practices" that I've been able to do without, even though I previously thought they were absolutely essential. I suspect that's a function of a few different factors, but I'm curious about one in particular.

I've worked on large and small teams before, but I'm currently working closely with just one other developer. I thought I'd try to list all the things that we don't have to do anymore, to see if there's any sort of process/value inflection point when you have exactly two developers.
 
For context, let me explain what we've been doing. It's not revolutionary, or even particularly interesting. If you squint it looks like XP.
 
We sit next to our users. It gets loud sometimes, but it's the best way to stay in touch and understand what's going on.
 
We pair for about 6 hours a day, every day. Everything that's on the critical path is worked on in a pair. Always. Our goal is always to get the thing we're working on to production as fast as we responsibly can, and the best way I've found to do that is with a pair.
 
We practice TDD. Our tests run fast (usually 1 second or less, for the whole suite) and we run them automatically on every change as we type. We generally test everything like this, except shell scripts, because we've never found a testing approach for scripts that we liked.
 
We refactor absolutely mercilessly. Every line of code has to have a purpose that relates directly back to value to the company. If you want to know what that purpose is, you can generally comment it out and see which test (exactly one test) fails. We don't go back and change things for the sake of changing them, though. Refactoring is never a standalone task, it's always done as part of adding new functionality. Our customers aren't aware if/when we refactor and they don't care, because it never impedes delivery.
 
We deploy first, and often. Step one in starting a new project is usually to deploy it. I find that figuring out how you're going to do that shapes the rest of the decisions you'll make. And every time we've made the system better we go to production, even if it's just one line of code. We have a test environment that's a reasonable mirror of our prod environment (including data) and we generally deploy there first.
 
Given all that, here's what we haven't been doing:
 
No formal backlog. We have three states for new features: Now, next, and probably never. Whatever we're working on now is the most valuable thing we can think of. Whatever's next is the next most valuable thing. When we pull new work, we ask "What's next?" and discuss. If someone comes to us with an idea, we ask "Is this more valuable than what we were planning to do next?" If not, it's usually forgotten, because by the time we finish that there's something else that's newer and better. But if it comes up again, maybe it'll make the cut.
 
No project managers/analysts. Our mentality on delivering software is that it's like running across a lake. If you keep moving fast, you'll keep moving. We assume that the value of our features is power-law distributed. There are a couple of things that really matter a lot (now and next), and everything else probably doesn't. We understand a lot about what is valuable to the company, and so the responsibility for finding the right tech<=>business fit best rests with us.
 
No estimate(s). We have one estimate: "That's too big." Other than that, we just get started and deliver incrementally. If something takes longer than a few days to deliver an increment, we regroup and make sure we're doing it right. We've only had a couple of instances where we needed to do something strategically that couldn't be broken up and took more than a few weeks.
 
No separate ops team. I get in a little earlier in the day and make sure nothing broke overnight. My coworker stays a little later, and tends to handle stuff that must be done after hours. We split overnight tasks as they come up. Anything that happens during the day, we both handle, or we split the pair temporarily and one person keeps coding.
 
No defect tracking. We fix bugs immediately. They're always the first priority, usually interrupting whatever we're doing. Or if a bug is not worth fixing, we change the alerting to reflect that. We have a pretty good monitoring system so our alerts are generally actionable and trustworthy. If you get an email there's a good chance you need to do something about it (fix it or silence it), and that happens right away.
 
No slow tests. All of our tests are fast tests. They run in a few milliseconds each and they generally test only a few lines of code at once. We try to avoid overlapping code with lots of different tests. It's a smell that you have too many branches in your code, and it makes refactoring difficult.
 
No integration tests. We use our test environment to explore the software and look for fast tests that we missed. We're firmly convinced this is something that should not be automated in any way....that's what the fast tests are for. If we have concerns about integration points we generally build those checks directly into the software and make it fail fast on deployment.
 
No CI/Build server. The master branch is both dev and production. We also use git as our deployment system (the old Heroku style), and so you're prevented from deploying without integrating first...which is rarely an issue anyway because we're always pairing.
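
For anyone who hasn't seen that old Heroku-style workflow: the usual mechanism is a bare repository on the server with a post-receive hook that checks out whatever was just pushed. Here's a minimal sketch of such a hook written in Python; the paths are placeholders, and a real setup would add its own build and restart steps.

    #!/usr/bin/env python3
    # post-receive hook living in a bare repo, e.g. /srv/app.git/hooks/post-receive
    import subprocess
    import sys

    WORK_TREE = "/srv/app"    # placeholder: directory the app actually runs from
    GIT_DIR = "/srv/app.git"  # placeholder: the bare repo you push to

    # git feeds the hook one "<old-sha> <new-sha> <ref>" line per updated ref
    for line in sys.stdin:
        old, new, ref = line.split()
        if ref == "refs/heads/master":
            subprocess.run(
                ["git", f"--work-tree={WORK_TREE}", f"--git-dir={GIT_DIR}",
                 "checkout", "-f", "master"],
                check=True,
            )
            # a real hook would restart services or run migrations here

Because the only way to deploy is to push to master, integrating and deploying are the same step.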
 
No code reviews. Since we're pairing all the time, we both know everything there is to know about the code.
 
No formal documentation. Again, we have pairing, and tests, and well written code that we both can read. We generally fully automate ops tasks, which serves as its own form of documentation. And as long as we can search through email and chat to fill in the rest, it hasn't been an issue.
 
Obviously, a lot of this works because of the context that we're in. But I can't help but wonder if there's something more to it than just the context. Does having a team of two in an otherwise large organization let us skip a lot of otherwise necessary practices, or does it all just round down to "smaller teams are more efficient?"

Testing with FIRE

Updated April 25th, 2023
 
For years now, I've held the belief that effective automated test suites have four essential attributes. These attributes have been referenced by other authors, and were the subject of a talk I gave at the Agile 2009 conference. But I was shocked to discover (that is, remember) that the only place they are formally documented is in my Continuous Testing book [Pragmatic Bookshelf, 2011], which is now out of date, out of print, and totally inaccessible to most of the Internet. And so, I'm capturing these four attributes here. I intend to treat this as a living document, updating it as my understanding of these attributes evolves.

Continue reading "Testing with FIRE" »


Serverless Web Apps: "Client-Service" Architecture Explained

My earlier piece on serverless auth seems to have gotten some attention from the Internet. 

In that post, I made a comparison to client-server architecture. I think that comparison is fair, but after discussing it with people I have a better way to explain it. If I abstract away the individual services from the diagram I used earlier, you can see it more plainly.

  [Diagram: client-service architecture]

You could call this architecture client-service, as opposed to client-server. It's based on a thick client that directly accesses web services, all sharing the same authentication credentials (provided by yet another service), which are used to authorize access to the services in a fine-grained way. This approach is made possible by the serverless technologies created by Amazon and other vendors.

While this idea wasn't exactly controversial, I did get a few questions from people who were skeptical that this was truly revolutionary (or even effective). So, in the rest of this post, I'd like to address a couple of the questions that were asked. 

Isn't putting all your business logic in JavaScript insecure?

The first thing to realize about this is that all clients are insecure. Just because you make a native app doesn't mean your application is immune to reverse engineering. If the security of your app depends on the fact that it's slightly more difficult for people to read your code on some platforms, you have a serious problem. So no matter what, if you're depending on client side logic to control what users can and can't do...well, I don't want to say you're doing it wrong, but I wouldn't want to use your app for anything important.

On the other hand, if you're worried about people reverse engineering your application's secret sauce, then by all means, don't put that part of your app's logic in the client. Unless your app is mostly made of sauce, this won't really be a problem. As I said in my earlier post, if some of your business logic needs to run on the server, microservices can be a simple, scalable way to do that in a serverless web app. Just keep in mind that you don't have to do this with everything.

Isn't this just OAuth and a thick client?

While both Auth0 and Cognito support OAuth identity providers, the capabilities of these services are beyond what you can do with just OAuth and a thick client. Many AWS services, including DynamoDB and S3, allow you to create fine-grained access control policies that use Cognito credentials to control what resources can be accessed by an authenticated user. For example, in DynamoDB, you can create a templated IAM policy that will limit access to individual records in a table based on the ID authenticated by Cognito. This means you can keep user data separate and secure while still keeping most of your business logic in the browser. The service itself can enforce the security constraints, and you don't have to use business logic to control who sees what.
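
To make that concrete, here's a sketch of roughly what such a templated policy can look like, shown as a Python dict for readability. The account ID and table name are placeholders; the dynamodb:LeadingKeys condition is the part that restricts each request to items whose partition key matches the caller's Cognito identity ID.

    # Attached to the IAM role that Cognito's authenticated identities assume.
    # Placeholder account ID and table name; the condition scopes every request
    # to items whose partition key equals the caller's Cognito identity ID.
    fine_grained_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserData",
            "Condition": {
                "ForAllValues:StringEquals": {
                    "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
                }
            }
        }]
    }

Because that policy is attached to the role the browser's credentials assume, DynamoDB itself enforces the separation between users; no server-side business logic is involved.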

Of course, using policy documents to control access isn't new. And federated authentication isn't new. And creating thick clients certainly isn't new. But bringing all these things together has created something new. As I show in my book, using this approach, you can create web applications at a fraction of the cost of a traditional load-balanced app server based design.


Ben Rady is the author of "Serverless Single Page Apps", published by the Pragmatic Bookshelf. If you've enjoyed this post and would like to read more, you should follow him on Twitter.

The Real Revolution of Serverless is Auth, Not Microservices

Serverless computing has been getting a lot of attention lately. There are frameworks, a conference, some notable blog posts, and a few books (including mine). I'm really glad to see this happening. I think serverless web apps can be incredibly powerful. As an industry we have a lot of room to grow, especially when it comes to scalability and reliability.

One thing I'm concerned about, however, is that some people seem to be conflating serverless architectures with microservice architectures, often based on AWS Lambda. While there is some overlap, these are not at all the same thing. When it comes to serverless web apps, the true innovation of serverless computing isn't microservices, it's auth.

Continue reading "The Real Revolution of Serverless is Auth, Not Microservices" »


Serverless Single Page Apps, In Print at The Pragmatic Bookshelf

I've been working on a book that explains the style of single page app that I've been building for the last few years. Up until very recently, I couldn't find a way to use this style for public-facing apps, because the infrastructure required wasn't generally available. Now, thanks to AWS, it is...making "Serverless" single page apps accessible to billions of desktop, tablet, and mobile devices around the world. This book is the synthesis of years of work in many different areas, and I couldn't be happier to have it available in print (and PDF, of course).

It's also (currently) the #1 new release in Mobile App Development & Programming on Amazon.com.

[Screenshot: Amazon.com new release ranking]

So that's pretty great. There are other great books on this topic, and I'm happy to see so many people interested in these ideas.


Stop Calling It Theft: Thoughts on TheDAO

Like many people involved in Ethereum, my attention has been thoroughly captured by the recent events surrounding TheDAO. As an Ethereum miner, I have a little stake in this game. The reentrancy vulnerability found in TheDAO smart contract has resulted in a single actor draining the ether contributed to it (3.6 million of 11.5 million so far, as the process is ongoing).

Since the mechanism being used here is a child DAO, the funds won't be available for transfer out of that account for another 27 days. In the meantime, a soft fork has been proposed that would block that transfer, allowing for the funds to be recovered and potentially redistributed to DAO token holders. After considering the arguments on both sides of this issue, and thinking about the role of Ethereum in a future economy full of digital assets, I've come to the conclusion that I am strongly opposed to this idea.

If Ethereum is to become what it purports to be, even considering this fork is a toxic solution to the problem. While I could go into discussions about the rule of law, or decentralized political systems, I think the best way to explain my position is an idea that most gamers will find familiar: If the game lets you do it, then it's not cheating.

The Ethereum foundation should take steps to prevent this kind of problem in the future. Those steps could even include a hard fork, or changes in the Ethereum roadmap. Perhaps making it so that a single contract can't hold such a large percentage of ether would be a good idea. I'm sure that in the coming months, people will have learned many lessons from this experience...lessons that can be applied to make the network stronger. But it wasn't the Ethereum network that was attacked here.

Although it will be a very painful outcome for many people, the Ethereum network worked exactly as intended. TheDAO contract writers tried to play the game and they lost. It turns out that TheDAO was actually just a $160m security audit bounty. Instead of calling the new owner of TheDAO's ether a thief, we should be congratulating them on a game well played. Changing the rules in the middle of the game sets the very dangerous precedent of saying that the behavior of the network is not determined by code, nor even by laws, but simply by the majority consensus of its participants. Any action taken on the Ethereum network going forward may be retroactively overridden by what is essentially mob rule.

Ethereum has the potential to move us into a new age of human organization. From tyranny and monarchy, to the rule of law, and then to the rule of code. Instead of killing all the lawyers, we can just make their work partially obsolete. But if we make this choice now, of retroactive rule by popular opinion, any hope of reaching that future with Ethereum will be critically undermined. While we may be able to recover the ether, the trust we lose will come at a far greater cost.