There are a few "best practices" that I've been able to do without, that I previously thought were absolutely essential. I would think that's a function of a few different factors, but I'm curious about one in particular.
I've worked on large and small teams before, but I'm currently working closely with just one other developer. I thought I'd try to list all the things that we don't have to do anymore, to see if there's any sort of process/value inflection point when you have exactly two developers.
For context, let me explain what we've been doing. It's not revolutionary, or even particularly interesting. If you squint it looks like XP.
We sit next to our users. It gets loud sometimes, but it's the best way to stay in touch and understand what's going on.
We pair for about 6 hours a day, every day. Everything that's on the critical path is worked on in a pair. Always. Our goal is always to get the thing we're working on to production as fast as we responsibly can, and the best way I've found to do that is with a pair.
We practice TDD. Our tests run fast (usually 1 second or less, for the whole suite) and we run them automatically on every change as we type. We generally test everything like this, except shell scripts, because we've never found a testing approach for scripts that we liked.
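To make that concrete, here's a hedged sketch (in Python, which isn't necessarily our stack) of the kind of test we mean: a few lines of pure logic, checked by a test that runs in microseconds. The `parse_price` function is invented for illustration.

```python
# A hypothetical pure function and its focused unit test.
# Illustrative only -- our real code and tooling aren't shown here.

def parse_price(text):
    """Parse a price string like '$1,234.56' into integer cents."""
    cleaned = text.replace("$", "").replace(",", "")
    dollars, _, cents = cleaned.partition(".")
    return int(dollars) * 100 + int(cents or 0)

def test_parse_price():
    # Each test exercises only a few lines of code, so the
    # whole suite stays in the low-millisecond range.
    assert parse_price("$1,234.56") == 123456
    assert parse_price("$2") == 200
```

A suite made entirely of tests like this stays comfortably under a second, even with hundreds of them.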
We refactor absolutely mercilessly. Every line of code has to have a purpose that relates directly back to value to the company. If you want to know what a line is for, you can generally comment it out and see which test (exactly one test) fails. We don't go back and change things for the sake of changing them, though. Refactoring is never a standalone task; it's always done as part of adding new functionality. Our customers aren't aware if/when we refactor and they don't care, because it never impedes delivery.
We deploy first, and often. Step one in starting a new project is usually to deploy it. I find that figuring out how you're going to do that shapes the rest of the decisions you'll make. And every time we've made the system better we go to production, even if it's just one line of code. We have a test environment that's a reasonable mirror of our prod environment (including data) and we generally deploy there first.
Given all that, here's what we haven't been doing:
No formal backlog. We have three states for new features: now, next, and probably never. Whatever we're working on now is the most valuable thing we can think of. Whatever's next is the next most valuable thing. When we pull new work, we ask "What's next?" and discuss. If someone comes to us with an idea, we ask "Is this more valuable than what we were planning to do next?" If not, it's usually forgotten, because by the time we finish that there's something else that's newer and better. But if it comes up again, maybe it'll make the cut.
No project managers/analysts. Our mentality on delivering software is that it's like running across a lake. If you keep moving fast, you'll keep moving. We assume that the value of our features is power-law distributed. There are a couple of things that really matter a lot (now and next), and everything else probably doesn't. We understand a lot about what is valuable to the company, and so the responsibility for finding the right tech<=>business fit best rests with us.
No estimate(s). We have one estimate: "That's too big." Other than that, we just get started and deliver incrementally. If something takes longer than a few days to deliver an increment, we regroup and make sure we're doing it right. We've only had a couple of instances where we needed to do something strategically that couldn't be broken up and took more than a few weeks.
No separate ops team. I get in a little earlier in the day and make sure nothing broke overnight. My coworker stays a little later, and tends to handle stuff that must be done after hours. We split overnight tasks as they come up. Anything that happens during the day, we both handle, or we split the pair temporarily and one person keeps coding.
No defect tracking. We fix bugs immediately. They're always the first priority, usually interrupting whatever we're doing. Or if a bug is not worth fixing, we change the alerting to reflect that. We have a pretty good monitoring system so our alerts are generally actionable and trustworthy. If you get an email there's a good chance you need to do something about it (fix it or silence it), and that happens right away.
No slow tests. All of our tests are fast tests. They run in a few milliseconds each and they generally test only a few lines of code at once. We try to avoid overlapping code with lots of different tests. It's a smell that you have too many branches in your code, and it makes refactoring difficult.
No integration tests. We use our test environment to explore the software and look for fast tests that we missed. We're firmly convinced this is something that should not be automated in any way...that's what the fast tests are for. If we have concerns about integration points we generally build those checks directly into the software and make it fail fast on deployment.
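Here's a sketch of what "fail fast on deployment" can look like. This is a hypothetical check runner in Python (the probe names and shapes are assumptions, not our real checks): each integration point gets probed once when the process starts, and any failure aborts the deploy loudly.

```python
import sys

# Hypothetical fail-fast startup checks. Each check is a
# (name, probe) pair where probe() raises on failure -- e.g.
# opening a DB connection or fetching a required config value.

def run_startup_checks(checks):
    """Run every probe once; return a list of (name, reason) failures."""
    failures = []
    for name, probe in checks:
        try:
            probe()
        except Exception as exc:
            failures.append((name, str(exc)))
    return failures

def fail_fast(checks):
    """Print every failure and refuse to start if there are any."""
    failures = run_startup_checks(checks)
    for name, reason in failures:
        print(f"startup check failed: {name}: {reason}", file=sys.stderr)
    if failures:
        sys.exit(1)  # the deploy halts here, before traffic arrives
```

Because the checks live in the application itself, a broken integration point shows up at deploy time instead of lurking until a request hits it.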
No CI/Build server. The master branch is both dev and production. We also use git as our deployment system (the old Heroku style), and so you're prevented from deploying without integrating first...which is rarely an issue anyway because we're always pairing.
No code reviews. Since we're pairing all the time, we both know everything there is to know about the code.
No formal documentation. Again, we have pairing, and tests, and well written code that we can both read. We generally fully automate ops tasks, which serves as its own form of documentation. And as long as we can search through email and chat to fill in the rest, it hasn't been an issue.
Obviously, a lot of this works because of the context that we're in. But I can't help but wonder if there's something more to it than just the context. Does having a team of two in an otherwise large organization let us skip a lot of otherwise necessary practices, or does it all just round down to "smaller teams are more efficient?"
I forgot to put something on the Internet.
For almost 8 years now, I've held the belief that effective automated test suites have four essential attributes. These attributes have been referenced by other authors, and were the subject of a talk I gave at the Agile 2009 conference. But I was shocked to discover (that is, remember) that the only place they are formally documented is in my Continuous Testing book [Pragmatic Bookshelf, 2011], which is now out of date, out of print, and totally inaccessible to most of the Internet.
And so now, we blog.
My earlier piece on serverless auth seems to have gotten some attention from the Internet.
In that post, I made a comparison to client-server architecture. I think that comparison is fair, but after discussing it with people I have a better way to explain it. If I abstract away the individual services from the diagram I used earlier, you can see it more plainly.
You could call this architecture client-service, as opposed to client-server. It's based on a thick client that directly accesses web services, all sharing the same authentication credentials (provided by yet another service), which are used to authorize access to the services in a fine-grained way. This approach is made possible by the serverless technologies created by Amazon and other vendors.
While this idea wasn't exactly controversial, I did get a few questions from people who were skeptical that this was truly revolutionary (or even effective). So, in the rest of this post, I'd like to address a couple of the questions that were asked.
The first thing to realize about this is that all clients are insecure. Just because you make a native app doesn't mean your application is immune to reverse engineering. If the security of your app depends on the fact that it's slightly more difficult for people to read your code on some platforms, you have a serious problem. So no matter what, if you're depending on client side logic to control what users can and can't do...well, I don't want to say you're doing it wrong, but I wouldn't want to use your app for anything important.
On the other hand, if you're worried about people reverse engineering your application's secret sauce, then by all means, don't put that part of your app's logic in the client. Unless your app is mostly made of sauce, this won't really be a problem. As I said in my earlier post, if some of your business logic needs to run on the server, microservices can be a simple, scalable way to do that in a serverless web app. Just keep in mind that you don't have to do this with everything.
Isn't this just OAuth and a thick client?
While both Auth0 and Cognito support OAuth identity providers, the capabilities of these services are beyond what you can do with just OAuth and a thick client. Many AWS services, including DynamoDB and S3, allow you to create fine-grained access control policies that use Cognito credentials to control what resources can be accessed by an authenticated user. For example, in DynamoDB, you can create a templated IAM policy that will limit access to individual records in a table based on the ID authenticated by Cognito. This means you can keep user data separate and secure while still keeping most of your business logic in the browser. The service itself can enforce the security constraints, and you don't have to use business logic to control who sees what.
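Here's a sketch of the kind of templated policy described above, written out as a Python dict for readability. The table name, region, and account ID are placeholders I made up; the `dynamodb:LeadingKeys` condition key and the `${cognito-identity.amazonaws.com:sub}` policy variable are the real AWS mechanisms for this.

```python
import json

# A fine-grained DynamoDB policy sketch: items in the (hypothetical)
# "UserData" table are only visible when their partition key equals
# the caller's Cognito identity ID. DynamoDB enforces this itself;
# no server-side business logic is involved.

per_user_dynamodb_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserData",
        "Condition": {
            "ForAllValues:StringEquals": {
                "dynamodb:LeadingKeys": [
                    "${cognito-identity.amazonaws.com:sub}"
                ]
            }
        }
    }]
}

print(json.dumps(per_user_dynamodb_policy, indent=2))
```

Attach a policy like this to the role that Cognito hands to authenticated users, and each user can only ever touch their own rows.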
Of course, using policy documents to control access isn't new. And federated authentication isn't new. And creating thick clients certainly isn't new. But bringing all these things together has created something new. As I show in my book, using this approach, you can create web applications at a fraction of the cost of a traditional load-balanced app server based design.
Serverless computing has been getting a lot of attention lately. There are frameworks, a conference, some notable blog posts, and a few books (including mine). I'm really glad to see this happening. I think serverless web apps can be incredibly powerful. As an industry we have a lot of room to grow, especially when it comes to scalability and reliability.
One thing I'm concerned about, however, is that some people seem to be conflating serverless architectures with microservice architectures, often based on AWS Lambda. While there is some overlap, these are not at all the same thing. When it comes to serverless web apps, the true innovation of serverless computing isn't microservices, it's auth.
I've been working on a book that explains the style of single page app that I've been building for the last few years. Up until very recently, I couldn't find a way to use this style for public-facing apps, because the infrastructure required wasn't generally available. Now, thanks to AWS, it is...making "Serverless" single page apps accessible to billions of desktop, tablet, and mobile devices around the world. This book is the synthesis of years of work in many different areas, and I couldn't be happier to have it available in print (and PDF, of course).
It's also (currently) the #1 new release in Mobile App Development & Programming on Amazon.com.
So that's pretty great. There are other great books on this topic, and I'm happy to see so many people interested in these ideas.
Like many people involved in Ethereum, my attention has been thoroughly captured by the recent events surrounding TheDAO. As an Ethereum miner, I have a little stake in this game. The reentrancy vulnerability found in TheDAO smart contract has resulted in a single actor draining the ether contributed to TheDAO smart contract (3.6 million of 11.5 million, so far, as the process is ongoing).
Since the mechanism being used here is a child DAO, the funds won't be available for transfer out of that account for another 27 days. In the meantime, a soft fork has been proposed that would block that transfer, allowing for the funds to be recovered and potentially redistributed to DAO token holders. After considering the arguments on both sides of this issue, and thinking about the role of Ethereum in a future economy full of digital assets, I've come to the conclusion that I am strongly opposed to this idea.
If Ethereum is to become what it purports to be, even considering this fork is a toxic solution to the problem. While I could go into discussions about the rule of law, or decentralized political systems, I think the best way to explain my position is an idea that most gamers will find familiar: If the game lets you do it, then it's not cheating.
The Ethereum foundation should take steps to prevent this kind of problem in the future. Those steps could even include a hard fork, or changes in the Ethereum roadmap. Perhaps making it so that a single contract can't hold such a large percentage of ether would be a good idea. I'm sure that in the coming months, people will learn many lessons from this experience...lessons that can be applied to make the network stronger. But it wasn't the Ethereum network that was attacked here.
Although it will be a very painful outcome for many people, the Ethereum network worked exactly as intended. TheDAO contract writers tried to play the game and they lost. It turns out that TheDAO was actually just a $160m security audit bounty. Instead of calling the new owner of TheDAO's ether a thief, we should be congratulating them on a game well played. Changing the rules in the middle of the game sets the very dangerous precedent of saying that the behavior of the network is not determined by code, nor even by laws, but simply by the majority consensus of its participants. Any action taken on the Ethereum network going forward may be retroactively overridden by what is essentially mob rule.
Ethereum has the potential to move us into a new age of human organization. From tyranny and monarchy, to the rule of law, and then to the rule of code. Instead of killing all the lawyers, we can just make their work partially obsolete. But if we make this choice now, of retroactive rule by popular opinion, hope of reaching that future with Ethereum will be critically undermined. While we may be able to recover the ether, the trust we lose will come at a far greater cost.
Microservices have problems. Monoliths have problems. How do you wind up in a happy middle? Here's what I do.
As I talked about in my new book, I'm skeptical of starting systems with a microservice architecture. Splitting a new system across a bunch of different services presumes you'll know how to organize things up front, and lots of little microservices can make refactoring difficult.
So I start with a small monolith. As I build, I add tests. My tests run very fast...hundreds of tests per second. I run the tests automatically on every change to the code, so speed is essential.
When the entire test suite for the monolith starts creeping up into the 800-900ms range, I start to notice the time it takes to run the tests, and then I know it's time for this monolith to split into two smaller services. By then, I usually know enough about the system to know how to split it cleanly, and because I have a good test suite, refactoring it into this shape is easy.
It usually doesn't split in half...80/20 is more common, but it's enough to speed my tests up again. After that, I just keep growing the services and splitting as necessary. The last system I built like this wound up with dozens of services, each with dozens or hundreds of tests which never take more than a second to run. Right in the Goldilocks Zone, if you ask me.
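The trigger itself is simple enough to sketch. The 900ms budget is the one from this post; the timing helper and names are invented for illustration, not lifted from a real build.

```python
import time

# A toy version of the split trigger: time the whole suite,
# and flag the monolith for splitting once it outgrows its budget.

SPLIT_THRESHOLD_MS = 900

def time_suite(tests):
    """Run every test callable and return elapsed wall time in ms."""
    start = time.perf_counter()
    for test in tests:
        test()
    return (time.perf_counter() - start) * 1000

def should_split(elapsed_ms, threshold_ms=SPLIT_THRESHOLD_MS):
    """True once the suite has crept past its time budget."""
    return elapsed_ms >= threshold_ms
```

The point isn't the mechanism; it's that test speed, not architecture diagrams, decides when a service is too big.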
Let's start with a word problem. Assume you live in a busy trick-or-treating neighborhood and that, on average, a group of four rings your doorbell every minute and takes 1/2 oz of candy per person. If you leave a bowl full of 2 lbs of candy on your front step, how much time will elapse before it will all be gone?
Answer: It's a trick question. Given that the NIST's strontium lattice clock, the most precise clock in the world, is only capable of measuring time in femtoseconds, nobody knows how long it takes. Humanity has no device capable of measuring the infinitesimal amount of time it takes for unattended candy to disappear from a doorstep on Halloween.
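For the record, the naive arithmetic the problem seems to be asking for works out like this:

```python
# The textbook answer, ignoring the Halloween candy singularity.
bowl_oz = 2 * 16          # 2 lbs of candy = 32 oz
oz_per_minute = 4 * 0.5   # one group of four per minute, 1/2 oz each
minutes = bowl_oz / oz_per_minute
print(minutes)  # 16.0 minutes -- if only
```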
To work around this problem, I decided to build a device to hand out candy on Halloween in a more...civilized...manner.
When I moved into my house, the mailbox was in pretty sorry shape. It was corroded, and the mail flap was stuck open. On top of that, it had an integrated doorbell that didn't work. Lastly, the entire border of the mailbox was covered with an ugly and aging caulk job, complete with rotting natural fiber insulation that had been there since god-knows-when.
The first thing I did was try to get the doorbell working. Here you can see the original cloth insulated wires that were probably installed when the house was built in 1929.
Surprisingly, the wires still seemed to be connected, and I found where they came out in the basement. So after replacing the button with a metal momentary pushbutton, I tried building a simple doorbell using an XBee radio and connecting it to the wires. I had a Raspberry Pi XBee basestation that I use for other projects, and I wrote a simple script to send a Pushover notification to my phone whenever someone pressed the button.
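The notification script was only a few lines. Here's a sketch of the Pushover half of it (the token, user key, and XBee frame handling are placeholders, not my real setup); Pushover accepts a form-encoded POST to its `messages.json` endpoint.

```python
import urllib.parse
import urllib.request

# Hypothetical doorbell notifier. Pushover wants a form-encoded
# POST with your app token, user key, and a message.
PUSHOVER_URL = "https://api.pushover.net/1/messages.json"

def build_notification(token, user, message):
    """Encode the Pushover form body for one notification."""
    return urllib.parse.urlencode({
        "token": token,
        "user": user,
        "message": message,
    }).encode("ascii")

def notify(token, user, message):
    """Send the notification; returns the HTTP status code."""
    req = urllib.request.Request(
        PUSHOVER_URL, data=build_notification(token, user, message))
    with urllib.request.urlopen(req) as resp:
        return resp.status

# In the real script, the XBee frame handler would call something like:
# notify(APP_TOKEN, USER_KEY, "Someone is at the door!")
```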
Unfortunately, there was an intermittent short in the wires. After getting 10 or so spurious doorbell notifications on the first day, I knew I had to take more drastic measures.
I found the Honeywell RCWL300A wireless doorbell on Amazon. I ordered it hoping that I could modify it to be activated by the doorbell that I had already bought, since I didn't want to just stick a big plastic button on the outside of my house. I opened it up and started tracing the circuit with the macro lens I have for my iPhone.
There were 5 solder points on the board, and I found two of them seemed to be connected to the doorbell button. Using a couple of leads, I confirmed that connecting the 2nd and 4th pins would trigger the doorbell. Huzzah! Then I soldered on a couple of wires, drilled two holes in the case, and then tucked the device up inside my mail slot. I then ran the wires up to the doorbell button, and I had a working doorbell (with a wireless chime to boot!).
Then I turned my attention to the mail slot itself. The first step was just to take it apart, and considering that it had completely rusted open, this was rather challenging. The mail slot cover moved on an axle that was either welded or glued into the surrounding cover.
I wound up just cutting it out with a dremel. I replaced it with a threaded rod and held it in place with a couple of nuts and lock washers. Then I took off all the corrosion with a combination of sandpaper and a wire brush wheel. Then I painted it with about 5 coats of Rust-o-leum Oil Rubbed Bronze metallic spray paint. I had my doubts about the paint adhering to the surface, but it turned out pretty good.
Next, I replaced the natural fiber insulation with some closed cell spray foam, which worked pretty well, although it was really a mess to apply. I then repositioned the mounting screws to shift the mailbox up a bit to cover the gap.
Of course, like every project, there are still a few rough spots I'd like to clean up. But as you can see, the end result is a big improvement.
As a technologist, I often think about Marc Andreessen's assertion that software is eating the world. It's a very provocative statement, but I can't really disagree with it. Whether we like it or not, we are building a new society in which labor is devalued. Thought workers are quickly becoming the only essential employees for many organizations. The middle class, who up until now has been dependent on their ability to trade labor for capital, is being destroyed.
Hope has been offered by the idea that we may be building a "post-scarcity society." One in which trading your labor for subsistence is no longer necessary. If we are able to optimize the cost of everything down to free or nearly free, the proponents argue, we might wind up with a new society that looks something like Star Trek. And who wouldn't want to live in the Star Trek universe?
Creating a society that even remotely resembles the Star Trek universe would surely be mankind's greatest achievement. Neil deGrasse Tyson once looked into why civilizations do great things like that. He was trying to figure out how to rekindle interest in space exploration. He found that all civilizations throughout history have only ever done something great for one of three reasons:
- Defense (aka War)
- Economics (aka Trade)
- Praise of royalty or deity (aka Religion)
In the canon of Star Trek, humanity's modern renaissance happened when we were first visited by the Vulcan race. Alerted to our existence by the first successful test of a warp drive, Vulcans landed on earth with a message of peace and friendship. The course of all of humanity was changed in an instant, because that event has _all three_ of the elements that deGrasse Tyson describes. The Vulcans represented a potential ally in a galaxy of previously unknown aggressors. They were a new conduit for trade and commerce, opening new markets and providing new technology. Finally, proof of the existence of an intelligent race other than humans was, for the bulk of humanity, something that completely reshaped their sense of self and spirit. If you doubt the religious significance of that event, consider this: Spock was only _half_ Vulcan. If having a new species to breed with doesn't change your ideas about God, nothing will.
The thing that created Star Trek was not post-scarcity. Post-scarcity was the effect, not the cause, of human-extraterrestrial first contact. The Star Trek universe was created through the unification of all of humanity into a singular guiding goal: The exploration of space. That single event was so powerful as to bring about all the changes necessary for humanity to move past the industrial revolution, and view an individual's contribution of labor not as a prerequisite for societal approval, but as an inefficiency to be happily optimized away.
We don't have Vulcans. We have the Internet. And they are not the same thing.
While the Internet is born out of military roots, its effects are primarily economic. It does not have the transformative effect that contact with a sentient alien species would have. In the absence of this, we have no reason to believe that the world we are building will, in any way, resemble the sci-fi fantasy that we all hope it would.
The world we are building does not have a powerful, unifying force behind it. It has only self-interest and the legacy of societal structures that are unable to deal with new realities. America, in particular, is culturally ill-equipped to handle these new realities. The new world we are building is much more likely to be a technological feudalism than it is to be a utopian commune. If we don't take steps to shape its direction now, we will not be given a second chance.