Two Player (Cooperative) Rules for Dixit

Dixit is a favorite game in my house, but we can’t always find the minimum three players to play. So my daughter came up with a two player cooperative variant that I think is even more fun than the original. It uses the standard pieces and cards from the original Dixit game, and takes about 20 minutes to play. Here’s how it works:

Instead of competing to score the most points, the players work together to score 4 points (or 5 for a harder difficulty) before running out of cards. The players score points by having one player give “clue” cards to the other player, to get them to guess which of the 6 cards on the board is the “secret” card.

Players alternate taking turns as the active player. The active player deals out 6 cards into the 6 card slots on the Dixit board. They then choose a card slot tile with a number on it to represent the “secret” card they want the other player to pick. They place this tile face down on the table.

Once the active player has selected the secret card, they may draw up to 7 cards from the deck and select as many of these cards as they like to serve as "clue" cards. These cards are given to the other player, and the rest are discarded. The active player is not allowed to give any other verbal or non-verbal hints or clues about what the secret card might be. The active player may also choose to give no clue cards, if giving any of the available cards would simply misdirect the other player.

The other player then uses the clue cards to guess the secret card. If they guess correctly, the players score a point. If they have enough points, they win! Otherwise, all the cards played on this turn are discarded, and the other player becomes the active player. Use one of the player score tokens to keep score on the point track on the Dixit board.

If the players do not have enough cards to play the 6 initial cards plus at least one clue card at the start of a turn, the game is over immediately. If the active player runs out of cards while drawing clue cards, the game ends after the current turn.

I actually like this variation much more than the original Dixit rules. I really like cooperative games, and this plays faster and is simpler than the original, while still keeping the fuzzy pattern matching that makes the original game fun.


Do Bigger Teams Use Fewer Technologies?

I speculated on Twitter that adding members to a software team reduces the number of new technologies that team can use. This is because unless all members are comfortable with a new technology, adding it can cause a Conway's-law-like split. For example, a system written in Java and C++ might split into two systems managed by two different teams, each exclusively using one language. The resulting teams would each have a less diverse technology stack, further contributing to the pattern. This speculation is based on my personal observations that smaller software teams tend to be more diverse in the primary languages and tools they use, while larger teams tend to be less so (e.g. "We use anything that works" vs. "We're a Java shop").

To examine this a little closer, I first need to explain what I mean by a "team". A team is a group of people who succeed or fail together. If you can fail while I succeed, then we're not on the same team. What success means for a software team varies, but it's harder for members of a team to participate in the success of that team if they can't contribute equally. A simple way to achieve this is to have every person do/know everything. As I examined in Powers of Two, a pair of programmers can do this quite easily.

As a team grows, it's less likely that every person on the team will be able to use all technologies equally. A common reaction to this is to limit the number of technologies used, relying only on the ones that everyone on the team can use effectively. As more people are added, the only common technology becomes a single language or platform, and the primary prerequisite for joining the team is that you know this technology well.

Adding a new technology into this mix can cause the team to fracture along the technology boundary. If some programmers prefer working in one language/editor/platform or another, or have a preference for working on the type of problem that technology was adopted to solve, the goals of the individuals on the team can begin to diverge. Unless the team takes steps to resolve these conflicts, they will eventually lose a common definition of success.

As one sub-group starts to view success differently than the others, the pressure to split increases. The wider organization can respond to this by changing the org structure, by rolling back the addition of new technology, or by forcing the individuals to continue working as a single unit with divergent goals. In two of these three cases, you wind up with teams with a technology monoculture...the same result as if you had never introduced the new technology at all.

This idea is speculative, but I do think it's something you could measure empirically. If you could find a way to tease out the actual teams, GitHub organization data might have everything you need. You could see how many different languages each team regularly uses, and compare that to the size of the team to see if there's a pattern. I'll leave that exercise to the GitHub data mining experts.
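
If I were going to try it, the query would look something like this rough sketch (Node 18+, using the GitHub REST API). The org name and token are placeholders, and mapping GitHub teams to the "real" teams described above is only an approximation:

    // Rough sketch: count distinct languages per team in a GitHub org.
    // ORG and GITHUB_TOKEN are placeholders; pagination is ignored for brevity.
    const ORG = "your-org-here";
    const headers = {
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
      Accept: "application/vnd.github+json",
    };

    async function get(path) {
      const res = await fetch(`https://api.github.com${path}`, { headers });
      if (!res.ok) throw new Error(`${path}: ${res.status}`);
      return res.json();
    }

    async function main() {
      const teams = await get(`/orgs/${ORG}/teams`);
      for (const team of teams) {
        const repos = await get(`/orgs/${ORG}/teams/${team.slug}/repos`);
        const languages = new Set();
        for (const repo of repos) {
          const langs = await get(`/repos/${repo.full_name}/languages`);
          Object.keys(langs).forEach((l) => languages.add(l));
        }
        const members = await get(`/orgs/${ORG}/teams/${team.slug}/members`);
        // Compare team size to language count to look for the pattern.
        console.log(`${team.name}: ${members.length} members, ${languages.size} languages`);
      }
    }

    main().catch(console.error);

Counting languages this way is crude (a repo's language list includes build scripts and the like), but it would be enough to see whether the team-size/language-count correlation shows up at all.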


Powers of Two

There are a few "best practices" that I previously thought were absolutely essential, but that I've found I can do without. I suspect that's a function of a few different factors, but I'm curious about one in particular.

I've worked on large and small teams before, but I'm currently working closely with just one other developer. I thought I'd try to list all the things that we don't have to do anymore, to see if there's any sort of process/value inflection point when you have exactly two developers.
 
For context, let me explain what we've been doing. It's not revolutionary, or even particularly interesting. If you squint it looks like XP.
 
We sit next to our users. It gets loud sometimes, but it's the best way to stay in touch and understand what's going on.
 
We pair for about 6 hours a day, every day. Everything that's on the critical path is worked on in a pair. Always. Our goal is always to get the thing we're working on to production as fast as we responsibly can, and the best way I've found to do that is with a pair.
 
We practice TDD. Our tests run fast (usually 1 second or less, for the whole suite) and we run them automatically on every change as we type. We generally test everything like this, except shell scripts, because we've never found a testing approach for scripts that we liked.
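
To make "fast" concrete, here's a rough sketch of the kind of test I mean, written with Node's built-in test runner and assert module; the priceWithTax function is just an invented example.

    // A sketch of a "fast" test: no I/O, no framework startup, just a few
    // lines of code under test. The priceWithTax function is invented.
    const test = require("node:test");
    const assert = require("node:assert");
    const { priceWithTax } = require("./pricing"); // hypothetical module under test

    test("applies the tax rate to the base price", () => {
      assert.strictEqual(priceWithTax(100, 0.07), 107);
    });

    test("treats a missing tax rate as zero", () => {
      assert.strictEqual(priceWithTax(100), 100);
    });

Hundreds of tests like this finish in well under a second, which is what makes re-running the whole suite on every change practical.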
 
We refactor absolutely mercilessly. Every line of code has to have a purpose that relates directly back to value to the company. If you want to know what that purpose is, you can generally comment the line out and see which test (exactly one test) fails. We don't go back and change things for the sake of changing them, though. Refactoring is never a standalone task; it's always done as part of adding new functionality. Our customers aren't aware if/when we refactor and they don't care, because it never impedes delivery.
 
We deploy first, and often. Step one in starting a new project is usually to deploy it. I find that figuring out how you're going to do that shapes the rest of the decisions you'll make. And every time we make the system better, we go to production, even if it's just one line of code. We have a test environment that's a reasonable mirror of our prod environment (including data), and we generally deploy there first.
 
Given all that, here's what we haven't been doing:
 
No formal backlog. We have three states for new features: now, next, and probably never. Whatever we're working on now is the most valuable thing we can think of. Whatever's next is the next most valuable thing. When we pull new work, we ask "What's next?" and discuss. If someone comes to us with an idea, we ask "Is this more valuable than what we were planning to do next?" If not, it's usually forgotten, because by the time we finish that, there's something else that's newer and better. But if it comes up again, maybe it'll make the cut.
 
No project managers/analysts. Our mentality on delivering software is that it's like running across a lake. If you keep moving fast, you'll keep moving. We assume that the value of our features is power-law distributed. There are a couple of things that really matter a lot (now and next), and everything else probably doesn't. We understand a lot about what is valuable to the company, and so the responsibility for finding the right tech<=>business fit rests best with us.
 
No estimates. We have one estimate: "That's too big." Other than that, we just get started and deliver incrementally. If something takes longer than a few days to deliver an increment, we regroup and make sure we're doing it right. We've only had a couple of instances where we needed to do something strategic that couldn't be broken up and took more than a few weeks.
 
No separate ops team. I get in a little earlier in the day and make sure nothing broke overnight. My coworker stays a little later, and tends to handle stuff that must be done after hours. We split overnight tasks as they come up. Anything that happens during the day, we both handle, or we split the pair temporarily and one person keeps coding.
 
No defect tracking. We fix bugs immediately. They're always the first priority, usually interrupting whatever we're doing. Or if a bug is not worth fixing, we change the alerting to reflect that. We have a pretty good monitoring system so our alerts are generally actionable and trustworthy. If you get an email there's a good chance you need to do something about it (fix it or silence it), and that happens right away.
 
No slow tests. All of our tests are fast tests. They run in a few milliseconds each and they generally test only a few lines of code at once. We try to avoid covering the same code with lots of different tests; that kind of overlap is a smell that you have too many branches in your code, and it makes refactoring difficult.
 
No integration tests. We use our test environment to explore the software and look for fast tests that we missed. We're firmly convinced this is something that should not be automated in any way...that's what the fast tests are for. If we have concerns about integration points we generally build those checks directly into the software and make it fail fast on deployment.
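
As a sketch of what "build those checks directly into the software" might look like (the environment variable names here are made up):

    // A fail-fast dependency check run at startup, right after a deploy.
    // The environment variable names are hypothetical placeholders.
    const DEPENDENCIES = [
      process.env.AUTH_SERVICE_HEALTH_URL,
      process.env.DATA_SERVICE_HEALTH_URL,
    ].filter(Boolean);

    async function checkDependencies() {
      for (const url of DEPENDENCIES) {
        const res = await fetch(url); // Node 18+ global fetch
        if (!res.ok) {
          console.error(`Dependency check failed: ${url} returned ${res.status}`);
          process.exit(1); // refuse to start rather than limp along
        }
      }
    }

    checkDependencies();

If a deploy can't reach what it depends on, we'd rather find out in the first second than from an alert later.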
 
No CI/Build server. The master branch is both dev and production. We also use git as our deployment system (the old Heroku style), and so you're prevented from deploying without integrating first...which is rarely an issue anyway because we're always pairing.
 
No code reviews. Since we're pairing all the time, we both know everything there is to know about the code.
 
No formal documentation. Again, we have pairing, and tests, and well-written code that we both can read. We generally fully automate ops tasks, which serves as its own form of documentation. And as long as we can search through email and chat to fill in the rest, it hasn't been an issue.
 
Obviously, a lot of this works because of the context that we're in. But I can't help but wonder if there's something more to it than just the context. Does having a team of two in an otherwise large organization let us skip a lot of otherwise necessary practices, or does it all just round down to "smaller teams are more efficient"?

Testing with FIRE

I forgot to put something on the Internet.
 
For almost 8 years now, I've held the belief that effective automated test suites have four essential attributes. These attributes have been referenced by other authors, and were the subject of a talk I gave at the Agile 2009 conference. But I was shocked to discover (that is, remember) that the only place they are formally documented is in my Continuous Testing book [Pragmatic Bookshelf, 2011], which is now out of date, out of print, and totally inaccessible to most of the Internet. 
 
And so now, we blog. 

Continue reading "Testing with FIRE" »


Serverless Web Apps: "Client-Service" Architecture Explained

My earlier piece on serverless auth seems to have gotten some attention from the Internet. 

In that post, I made a comparison to client-server architecture. I think that comparison is fair, but after discussing it with people I have a better way to explain it. If I abstract away the individual services from the diagram I used earlier, you can see it more plainly.

  [Diagram: client-service architecture]

You could call this architecture client-service, as opposed to client-server. It's based on a thick client that directly accesses web services, all sharing the same authentication credentials (provided by yet another service), which are used to authorize access to the services in a fine-grained way. This approach is made possible by the serverless technologies created by Amazon and other vendors.
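
To make that concrete, here's a rough sketch of what the client side looks like with the AWS JavaScript SDK (v2-style API); the region, identity pool ID, and table name are placeholders.

    // Browser-side sketch: credentials come from Cognito, and the client talks
    // to DynamoDB directly. Region, identity pool ID, and table are placeholders.
    AWS.config.region = "us-east-1";
    AWS.config.credentials = new AWS.CognitoIdentityCredentials({
      IdentityPoolId: "us-east-1:00000000-0000-0000-0000-000000000000"
    });

    AWS.config.credentials.get(err => {
      if (err) return console.error(err);

      const db = new AWS.DynamoDB.DocumentClient();
      const me = AWS.config.credentials.identityId;

      // What this call is allowed to see is decided by the policies attached
      // to the credentials, not by any logic hidden in the client.
      db.get({ TableName: "UserData", Key: { userId: me } })
        .promise()
        .then(result => console.log(result.Item), console.error);
    });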

While this idea wasn't exactly controversial, I did get a few questions from people who were skeptical that this was truly revolutionary (or even effective). So, in the rest of this post, I'd like to address a couple of the questions that were asked. 

Isn't putting all your business logic in JavaScript insecure?

The first thing to realize about this is that all clients are insecure. Just because you make a native app doesn't mean your application is immune to reverse engineering. If the security of your app depends on the fact that it's slightly more difficult for people to read your code on some platforms, you have a serious problem. So no matter what, if you're depending on client side logic to control what users can and can't do...well, I don't want to say you're doing it wrong, but I wouldn't want to use your app for anything important.

On the other hand, if you're worried about people reverse engineering your application's secret sauce, then by all means, don't put that part of your app's logic in the client. Unless your app is mostly made of sauce, this won't really be a problem. As I said in my earlier post, if some of your business logic needs to run on the server, microservices can be a simple, scalable way to do that in a serverless web app. Just keep in mind that you don't have to do this with everything.
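
For the part that really is sauce, a small Lambda-backed service is enough. Here's a rough sketch (Node.js runtime, API Gateway proxy-style event); the pricing rule is an invented placeholder, not anything from a real app:

    // Minimal AWS Lambda handler sketch for the server-side "secret sauce".
    // The pricing rule below is an invented placeholder.
    exports.handler = async (event) => {
      const { quantity } = JSON.parse(event.body || "{}");

      // Proprietary logic stays here, where clients can't read it.
      const price = computeSecretPrice(quantity);

      return {
        statusCode: 200,
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ price })
      };
    };

    function computeSecretPrice(quantity) {
      // Stand-in for whatever your actual secret sauce is.
      return Math.max(1, quantity) * 42;
    }

Everything else (views, validation for usability, workflow) can stay in the browser.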

Isn't this just OAuth and a thick client?

While both Auth0 and Cognito support OAuth identity providers, the capabilities of these services are beyond what you can do with just OAuth and a thick client. Many AWS services, including DynamoDB and S3, allow you to create fine-grained access control policies that use Cognito credentials to control what resources can be accessed by an authenticated user. For example, in DynamoDB, you can create a templated IAM policy that will limit access to individual records in a table based on the ID authenticated by Cognito. This means you can keep user data separate and secure while still keeping most of your business logic in the browser. The service itself can enforce the security constraints, and you don't have to use business logic to control who sees what.
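
The policy document (plain JSON) for that kind of rule looks roughly like this; the account ID and table name are placeholders. The template variable resolves to the caller's Cognito identity, so DynamoDB itself refuses to touch items whose partition key belongs to someone else:

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
        "Resource": ["arn:aws:dynamodb:us-east-1:123456789012:table/UserData"],
        "Condition": {
          "ForAllValues:StringEquals": {
            "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
          }
        }
      }]
    }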

Of course, using policy documents to control access isn't new. And federated authentication isn't new. And creating thick clients certainly isn't new. But bringing all these things together has created something new. As I show in my book, using this approach, you can create web applications at a fraction of the cost of a traditional load-balanced app server based design.


Ben Rady is the author of "Serverless Single Page Apps", published by the Pragmatic Bookshelf. If you've enjoyed this post and would like to read more, you should follow him on Twitter.

The Real Revolution of Serverless is Auth, Not Microservices

Serverless computing has been getting a lot of attention lately. There are frameworks, a conference, some notable blog posts, and a few books (including mine). I'm really glad to see this happening. I think serverless web apps can be incredibly powerful. As an industry we have a lot of room to grow, especially when it comes to scalability and reliability.

One thing I'm concerned about, however, is that some people seem to be conflating serverless architectures with microservice architectures, often based on AWS Lambda. While there is some overlap, these are not at all the same thing. When it comes to serverless web apps, the true innovation of serverless computing isn't microservices, it's auth.

Continue reading "The Real Revolution of Serverless is Auth, Not Microservices" »


Serverless Single Page Apps, In Print at The Pragmatic Bookshelf

I've been working on a book that explains the style of single page app that I've been building for the last few years. Up until very recently, I couldn't find a way to use this style for public-facing apps, because the infrastructure required wasn't generally available. Now, thanks to AWS, it is...making "Serverless" single page apps accessible to billions of desktop, tablet, and mobile devices around the world. This book is the synthesis of years of work in many different areas, and I couldn't be happier to have it available in print (and PDF, of course).

It's also (currently) the #1 new release in Mobile App Development & Programming on Amazon.com.

[Screenshot: Amazon's "#1 New Release" listing for the book]

So that's pretty great. There are other great books on this topic, and I'm happy to see so many people interested in these ideas.


Stop Calling It Theft: Thoughts on TheDAO

Like many people involved in Ethereum, I've had my attention thoroughly captured by the recent events surrounding TheDAO. As an Ethereum miner, I have a little stake in this game. The reentrancy vulnerability found in TheDAO smart contract has resulted in a single actor draining the ether contributed to it (3.6 million of 11.5 million so far, as the process is ongoing).

Since the mechanism being used here is a child DAO, the funds won't be available for transfer out of that account for another 27 days. In the meantime, a soft fork has been proposed that would block that transfer, allowing for the funds to be recovered and potentially redistributed to DAO token holders. After considering the arguments on both sides of this issue, and thinking about the role of Ethereum in a future economy full of digital assets, I've come to the conclusion that I am strongly opposed to this idea.

If Ethereum is to become what it purports to be, even considering this fork is a toxic solution to the problem. While I could go into discussions about the rule of law, or decentralized political systems, I think the best way to explain my position is an idea that most gamers will find familiar: If the game lets you do it, then it's not cheating.

The Ethereum foundation should take steps to prevent this kind of problem in the future. Those steps could even include a hard fork, or changes in the Ethereum roadmap. Perhaps making it so that a single contract can't hold such a large percentage of ether would be a good idea. I'm sure that in the coming months, people will have learned many lessons from this experience...lessons that can be applied to make the network stronger. But it wasn't the Ethereum network that was attacked here.

Although it will be a very painful outcome for many people, the Ethereum network worked exactly as intended. TheDAO contract writers tried to play the game and they lost. It turns out that TheDAO was actually just a $160m security audit bounty. Instead of calling the new owner of TheDAO's ether a thief, we should be congratulating them on a game well played. Changing the rules in the middle of the game sets the very dangerous precedent of saying that the behavior of the network is not determined by code, nor even by laws, but simply by the majority consensus of its participants. Any action taken on the Ethereum network going forward may be retroactively overridden by what is essentially mob rule.

Ethereum has the potential to move us into a new age of human organization. From tyranny and monarchy, to the rule of law, and then to the rule of code. Instead of killing all the lawyers, we can just make their work partially obsolete. But if we make this choice now, of retroactive rule by popular opinion, hope of reaching that future with Ethereum will be critically undermined. While we may be able to recover the ether, the trust we lose will come at a far greater cost.


One Second Services

Microservices have problems. Monoliths have problems. How do you wind up in a happy middle? Here's what I do.

As I talked about in my new book, I'm skeptical of starting systems with a microservice architecture. Splitting a new system across a bunch of different services presumes you'll know how to organize things up front, and lots of little microservices can make refactoring difficult. 
 
So I start with a small monolith. As I build, I add tests. My tests run very fast...hundreds of tests per second. I run the tests automatically on every change to the code, so speed is essential. 
 
When the entire test suite for the monolith starts creeping up into the 800-900ms range, I start to notice the time it takes to run the tests, and then I know it's time for this monolith to split into two smaller services. By then, I usually know enough about the system to know how to split it cleanly, and because I have a good test suite, refactoring it into this shape is easy. 
 
It usually doesn't split in half...80/20 is more common, but it's enough to speed my tests up again. After that, I just keep growing the services and splitting as necessary. The last system I built like this wound up with dozens of services, each with dozens or hundreds of tests which never take more than a second to run. Right in the Goldilocks Zone, if you ask me.
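
If you want to make that one-second budget explicit rather than relying on noticing it, a tiny guard around the suite is enough. This is just a sketch; the test entry point here is hypothetical:

    // Time the whole suite and complain when it creeps past the budget.
    // The ./test/run-all entry point is a hypothetical synchronous runner.
    const THRESHOLD_MS = 900;

    const start = Date.now();
    require("./test/run-all")(); // runs every test in the service
    const elapsed = Date.now() - start;

    if (elapsed > THRESHOLD_MS) {
      console.warn(
        `Suite took ${elapsed}ms (budget ${THRESHOLD_MS}ms). ` +
        `Time to think about splitting this service.`
      );
    }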

Candy or Death: The Automatic Halloween Candy Dispenser

Let's start with a word problem. Assume you live in a busy trick-or-treating neighborhood and that, on average, a group of four rings your doorbell every minute and takes 1/2 oz of candy per person. If you leave a bowl full of 2 lbs of candy on your front step, how much time will elapse before it will all be gone?

Answer: It's a trick question. Given that NIST's strontium lattice clock, the most precise clock in the world, is only capable of measuring time in femtoseconds, nobody knows how long it takes. Humanity has no device capable of measuring the infinitesimal amount of time it takes for unattended candy to disappear from a doorstep on Halloween.

To work around this problem, I decided to build a device to hand out candy on Halloween in a more...civilized...manner.

Continue reading "Candy or Death: The Automatic Halloween Candy Dispenser" »