Serverless Web Apps: "Client-Service" Architecture Explained

My earlier piece on serverless auth seems to have gotten some attention from the Internet. 

In that post, I made a comparison to client-server architecture. I think that comparison is fair, but after discussing it with people I have a better way to explain it. If I abstract away the individual services from the diagram I used earlier, you can see it more plainly.


You could call this architecture client-service, as opposed to client-server. It's based on a thick client that directly accesses web services, all sharing the same authentication credentials (provided by yet another service), which are used to authorize access to the services in a fine-grained way. This approach is made possible by the serverless technologies created by Amazon and other vendors.

While this idea wasn't exactly controversial, I did get a few questions from people who were skeptical that this was truly revolutionary (or even effective). So, in the rest of this post, I'd like to address a couple of the questions that were asked. 

Isn't putting all your business logic in JavaScript insecure?

The first thing to realize is that all clients are insecure. Just because you ship a native app doesn't mean your application is immune to reverse engineering. If the security of your app depends on it being slightly more difficult for people to read your code on some platforms, you have a serious problem. So no matter what, if you're depending on client-side logic to control what users can and can't do...well, I don't want to say you're doing it wrong, but I wouldn't want to use your app for anything important.

On the other hand, if you're worried about people reverse engineering your application's secret sauce, then by all means, don't put that part of your app's logic in the client. Unless your app is mostly made of sauce, this won't really be a problem. As I said in my earlier post, if some of your business logic needs to run on the server, microservices can be a simple, scalable way to do that in a serverless web app. Just keep in mind that you don't have to do this with everything.

Isn't this just OAuth and a thick client?

While both Auth0 and Cognito support OAuth identity providers, the capabilities of these services are beyond what you can do with just OAuth and a thick client. Many AWS services, including DynamoDB and S3, allow you to create fine-grained access control policies that use Cognito credentials to control what resources can be accessed by an authenticated user. For example, in DynamoDB, you can create a templated IAM policy that will limit access to individual records in a table based on the ID authenticated by Cognito. This means you can keep user data separate and secure while still keeping most of your business logic in the browser. The service itself can enforce the security constraints, and you don't have to use business logic to control who sees what.
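In concrete terms, a templated policy for that DynamoDB case looks roughly like this (the region, account ID, and table name are placeholders for your own values):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserData",
      "Condition": {
        "ForAllValues:StringEquals": {
          "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
        }
      }
    }
  ]
}
```

The `${cognito-identity.amazonaws.com:sub}` variable is substituted at request time with the caller's Cognito identity ID, so the `dynamodb:LeadingKeys` condition limits access to items whose partition key matches that user. The table itself enforces per-user isolation, with no server-side code.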

Of course, using policy documents to control access isn't new. And federated authentication isn't new. And creating thick clients certainly isn't new. But bringing all these things together has created something new. As I show in my book, using this approach, you can create web applications at a fraction of the cost of a traditional load-balanced, app-server-based design.

Ben Rady is the author of "Serverless Single Page Apps", published by the Pragmatic Bookshelf. If you've enjoyed this post and would like to read more, you should follow him on Twitter.

The Real Revolution of Serverless is Auth, Not Microservices

Serverless computing has been getting a lot of attention lately. There are frameworks, a conference, some notable blog posts, and a few books (including mine). I'm really glad to see this happening. I think serverless web apps can be incredibly powerful. As an industry we have a lot of room to grow, especially when it comes to scalability and reliability.

One thing I'm concerned about, however, is that some people seem to be conflating serverless architectures with microservice architectures, often based on AWS Lambda. While there is some overlap, these are not at all the same thing. When it comes to serverless web apps, the true innovation of serverless computing isn't microservices, it's auth.


Serverless Single Page Apps, In Print at The Pragmatic Bookshelf

I've been working on a book that explains the style of single page app that I've been building for the last few years. Up until very recently, I couldn't find a way to use this style for public-facing apps, because the infrastructure required wasn't generally available. Now, thanks to AWS, it is...making "Serverless" single page apps accessible to billions of desktop, tablet, and mobile devices around the world. This book is the synthesis of years of work in many different areas, and I couldn't be happier to have it available in print (and PDF, of course).

It's also (currently) the #1 new release in Mobile App Development & Programming on Amazon.


So that's pretty great. There are other great books on this topic, and I'm happy to see so many people interested in these ideas.

Stop Calling It Theft: Thoughts on TheDAO

Like many people involved in Ethereum, my attention has been thoroughly captured by the recent events surrounding TheDAO. As an Ethereum miner, I have a small stake in this game. The reentrancy vulnerability found in the TheDAO smart contract has allowed a single actor to drain the ether contributed to it (3.6 million of 11.5 million so far, as the process is ongoing).

Since the mechanism being used here is a child DAO, the funds won't be available for transfer out of that account for another 27 days. In the meantime, a soft fork has been proposed that would block that transfer, allowing for the funds to be recovered and potentially redistributed to DAO token holders. After considering the arguments on both sides of this issue, and thinking about the role of Ethereum in a future economy full of digital assets, I've come to the conclusion that I am strongly opposed to this idea.

If Ethereum is to become what it purports to be, even considering this fork is a toxic solution to the problem. While I could go into discussions about the rule of law, or decentralized political systems, I think the best way to explain my position is an idea that most gamers will find familiar: If the game lets you do it, then it's not cheating.

The Ethereum foundation should take steps to prevent this kind of problem in the future. Those steps could even include a hard fork, or changes in the Ethereum roadmap. Perhaps making it so that a single contract can't hold such a large percentage of ether would be a good idea. I'm sure that in the coming months, people will have learned many lessons from this experience...lessons that can be applied to make the network stronger. But it wasn't the Ethereum network that was attacked here.

Although it will be a very painful outcome for many people, the Ethereum network worked exactly as intended. TheDAO contract writers tried to play the game and they lost. It turns out that TheDAO was actually just a $160m security audit bounty. Instead of calling the new owner of TheDAO's ether a thief, we should be congratulating them on a game well played. Changing the rules in the middle of the game sets the very dangerous precedent of saying that the behavior of the network is determined not by code, nor even by laws, but simply by the majority consensus of its participants. Any action taken on the Ethereum network going forward may be retroactively overridden by what is essentially mob rule.

Ethereum has the potential to move us into a new age of human organization. From tyranny and monarchy, to the rule of law, and then to the rule of code. Instead of killing all the lawyers, we can just make their work partially obsolete. But if we make this choice now, of retroactive rule by popular opinion, hope of reaching that future with Ethereum will be critically undermined. While we may be able to recover the ether, the trust we lose will come at a far greater cost.

One Second Services

Microservices have problems. Monoliths have problems. How do you wind up in a happy middle? Here's what I do.

As I talked about in my new book, I'm skeptical of starting systems with a microservice architecture. Splitting a new system across a bunch of different services presumes you'll know how to organize things up front, and lots of little microservices can make refactoring difficult.

So I start with a small monolith. As I build, I add tests. My tests run very fast...hundreds of tests per second. I run the tests automatically on every change to the code, so speed is essential.

When the entire test suite for the monolith starts creeping up into the 800-900ms range, I start to notice the time it takes to run the tests, and then I know it's time for this monolith to split into two smaller services. By then, I usually know enough about the system to know how to split it cleanly, and because I have a good test suite, refactoring it into this shape is easy.

It usually doesn't split in half...80/20 is more common, but it's enough to speed my tests up again. After that, I just keep growing the services and splitting as necessary. The last system I built like this wound up with dozens of services, each with dozens or hundreds of tests that never take more than a second to run. Right in the Goldilocks Zone, if you ask me.

Candy or Death: The Automatic Halloween Candy Dispenser

Let's start with a word problem. Assume you live in a busy trick-or-treating neighborhood and that, on average, a group of four rings your doorbell every minute and takes 1/2 oz of candy per person. If you leave a bowl full of 2 lbs candy on your front step, how much time will elapse before it will all be gone?

Answer: It's a trick question. Given that the NIST strontium lattice clock, the most precise clock in the world, is only capable of measuring time in femtoseconds, nobody knows how long it takes. Humanity has no device capable of measuring the infinitesimal amount of time it takes for unattended candy to disappear from a doorstep on Halloween.
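For reference, here's the naive arithmetic the word problem is baiting you toward, with the rates taken straight from the problem statement:

```javascript
// Naive back-of-the-envelope answer to the candy word problem.
const groupSize = 4;      // trick-or-treaters per doorbell ring
const ozPerPerson = 0.5;  // candy taken per person, in ounces
const ringsPerMinute = 1; // one group per minute
const bowlOz = 2 * 16;    // 2 lbs of candy, at 16 oz per lb

const ozPerMinute = groupSize * ozPerPerson * ringsPerMinute; // 2 oz/min
const minutesToEmpty = bowlOz / ozPerMinute;
console.log(minutesToEmpty); // 16 minutes, in theory
```

Sixteen minutes, in theory. In practice, see above.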

To work around this problem, I decided to build a device to hand out candy on Halloween in a more...civilized...manner.


Refurbishing a Mail Slot and Doorbell

When I moved into my house, the mailbox was in pretty sorry shape. It was corroded, and the mail flap was stuck open. On top of that, it had an integrated doorbell that didn't work. Lastly, the entire border of the mailbox was covered with an ugly and aging caulk job, complete with rotting natural fiber insulation that had been there since god-knows-when.


The first thing I did was try to get the doorbell working. Here you can see the original cloth insulated wires that were probably installed when the house was built in 1929. 
Surprisingly, the wires still seemed to be connected, and I found where they came out in the basement. So after replacing the button with a metal momentary pushbutton, I tried building a simple doorbell using an XBee radio connected to the wires. I had a Raspberry Pi XBee base station that I use for other projects, and I wrote a simple script to send a Pushover notification to my phone whenever someone pressed the button.
Unfortunately, there was an intermittent short in the wires. After getting 10 or so spurious doorbell notifications on the first day, I knew I had to take more drastic measures.
I found the Honeywell RCWL300A wireless doorbell on Amazon. I ordered it hoping that I could modify it to be activated by the doorbell that I had already bought, since I didn't want to just stick a big plastic button on the outside of my house. I opened it up and started tracing the circuit with the macro lens I have for my iPhone.
There were 5 solder points on the board, and two of them seemed to be connected to the doorbell button. Using a couple of leads, I confirmed that connecting the 2nd and 4th pins would trigger the doorbell. Huzzah! So I soldered on a couple of wires, drilled two holes in the case, and tucked the device up inside my mail slot. Then I ran the wires up to the doorbell button, and I had a working doorbell (with a wireless chime to boot!).
Then I turned my attention to the mail slot itself. The first step was just to take it apart, and considering that it had completely rusted open, this was rather challenging. The mail slot cover moved on an axle that was either welded or glued into the surrounding cover.
I wound up just cutting it out with a Dremel. I replaced it with a threaded rod, held in place with a couple of nuts and lock washers. Then I took off all the corrosion with a combination of sandpaper and a wire brush wheel, and painted it with about 5 coats of Rust-Oleum Oil Rubbed Bronze metallic spray paint. I had my doubts about the paint adhering to the surface, but it turned out pretty well.
Next, I replaced the natural fiber insulation with some closed cell spray foam, which worked pretty well, although it was really a mess to apply. I then repositioned the mounting screws to shift the mailbox up a bit to cover the gap. 
Of course, like every project, there are still a few rough spots I'd like to clean up. But as you can see, the end result is a big improvement.


Why The Post Scarcity Society Will Not Be Star Trek

As a technologist, I often think about Marc Andreessen's assertion that software is eating the world. It's a very provocative statement, but I can't really disagree with it. Whether we like it or not, we are building a new society in which labor is devalued. Thought workers are quickly becoming the only essential employees for many organizations. The middle class, which up until now has depended on its ability to trade labor for capital, is being destroyed.

Hope has been offered by the idea that we may be building a "post-scarcity society": one in which trading your labor for subsistence is no longer necessary. If we are able to optimize the cost of everything down to free or nearly free, the proponents argue, we might wind up with a new society that looks something like Star Trek. And who wouldn't want to live in the Star Trek universe?

Creating a society that even remotely resembled the Star Trek universe would surely be mankind's greatest achievement. Neil deGrasse Tyson once looked into why civilizations do great things like that, while trying to figure out how to rekindle interest in space exploration. He found that civilizations throughout history have only ever done something great for one of three reasons:

  • Defense (aka War)
  • Economics
  • Religion

In the canon of Star Trek, humanity's modern renaissance happened when we were first visited by the Vulcan race. Alerted to our existence by the first successful test of a warp drive, Vulcans landed on Earth with a message of peace and friendship. The course of all of humanity was changed in an instant, because that event had _all three_ of the elements that deGrasse Tyson describes. The Vulcans represented a potential ally in a galaxy of previously unknown aggressors. They were a new conduit for trade and commerce, opening new markets and providing new technology. Finally, proof of the existence of an intelligent race other than humans was, for the bulk of humanity, something that completely reshaped their sense of self and spirit. If you doubt the religious significance of that event, consider this: Spock was only _half_ Vulcan. If having a new species to breed with doesn't change your ideas about God, nothing will.

The thing that created Star Trek was not post-scarcity. Post-scarcity was the effect, not the cause, of human-extraterrestrial first contact. The Star Trek universe was created through the unification of all of humanity into a singular guiding goal: The exploration of space. That single event was so powerful as to bring about all the changes necessary for humanity to move past the industrial revolution, and view an individual's contribution of labor not as a prerequisite for societal approval, but as an inefficiency to be happily optimized away.

We don't have Vulcans. We have the Internet. And they are not the same thing.

While the Internet was born of military roots, its effects are primarily economic. It does not have the transformative effect that contact with a sentient alien species would have. In the absence of such an event, we have no reason to believe that the world we are building will, in any way, resemble the sci-fi fantasy that we all hope it would.

The world we are building does not have a powerful, unifying force behind it. It has only self-interest and the legacy of societal structures that are unable to deal with new realities. America, in particular, is culturally ill-equipped to handle these new realities. The new world we are building is much more likely to be a technological feudalism than it is to be a utopian commune. If we don't take steps to shape its direction now, we will not be given a second chance.

Vim's undo list isn't a list. It's a tree.

Vim's undo list isn't a list. It's a tree...meaning that it keeps track of all the edits you make after having undone or redone something. Putting this power to use can be a bit daunting unless you keep a couple of simple Vim commands handy.

First, let's create an example to work with. Make a new buffer and type three things (switching back to normal mode after each line to produce three separate changes). You should wind up with something like this:

    first
    second
    third

Now let's say I undo the change that created "third", and then change "second" to "2nd". Now I have this:

    first
    2nd

You can undo and redo to remove and re-add "second" and "first", but there's no way to bring "third" back, right? In most editors, it would be lost.

Actually, in Vim, there are at least three ways to get it back.

The :earlier and :later commands will move you backward and forward in time across the undo tree. It's basically a time machine built into Vim. At this point, to bring back "third", we just need to use the :earlier command, like so:

:earlier 1

That will bring us back one edit in time (rather than along the undo/redo path), leaving a buffer that looks like this:

    first
    second
    third

If you prefer using 'g' rather than command mode, g+ and g- from normal mode will move you across the edit tree one step forward or backward, respectively.

And of course, no self-respecting time machine would be complete without time, so :earlier and :later both take a time as an argument as well. To jump back to the state of your code 30 seconds ago, just type this:

:earlier 30s

It works with [m]inutes, [h]ours, and [d]ays too. Being able to jump back and forth between changes I was making days ago, in just a few keystrokes, is just one of the many reasons I love Vim.

Ben Rady is the author of "Continuous Testing with Ruby, Rails, and JavaScript". If you've enjoyed this post and would like to read more, you should follow him on Twitter.

Lines of Code is the Best Software Productivity Metric

Lines of code is a great metric for productivity. Not only is it not broken, I would argue that it's clearly the best. The important question to ask about this metric is "How does programmer productivity relate to value delivered to a customer?"

If you want to measure what programmers produce, lines of code added is the only metric that makes sense. First, it's objective. It's easily obtained from source code repository logs. Many existing tools can already measure and track it. It can be applied to almost any programming language. And because programmers' workflows are (usually) so highly automated, they're able to measure their work product far more accurately than most professions. While there are other things that programmers create (emails, documents, meeting invites, coffee stains...), by definition they write code. They are uniquely able to solve problems by writing software, so measuring the production of that software makes sense.

So if we can measure productivity this way, why doesn't it seem to be useful when we do? In manufacturing, higher rates of production almost always lead to more value. I know that all you Lean Production advocates are yelling and screaming about inventory management and muda right now, but if you were producing and selling 1000 things a day and now all of a sudden you can produce 2000 of them at the same cost, there's net value there. If you can find a counterexample, please go ahead and produce those extra 1000 things and give them to me. I'll sell them on Amazon.

Software does not work like this at all. If your team was producing 1000 lines of code a day, and now all of a sudden they're producing 2000, you really have no idea what impact this has on value delivered. Are you creating twice as many features? Are you refactoring to pay off technical debt? Or are you just thrashing around trying to find a design that fits your domain? Maybe you're adding features, but not ones that users need. Maybe you're just making your system harder to use.

The reason for this productivity paradox is simple: software is not an asset. It is a liability. From a financial standpoint, creating it is much more akin to leasing office space than it is to producing finished product. It represents an ongoing cost of doing business that may or may not actually result in any value being delivered to a customer. Lines of code in a codebase are liabilities that you have to write, test, document, compile, deploy, etc....and the less of it you have for a given solution, the better.

Programmers solve problems, which can be assets. They often do so by creating software, thereby producing liabilities. In software, production is an expense. Competent programmers are able to create more value than cost when they do this, but not all programmers do. The best programmers can create value by removing code and the best solutions in software can often be achieved through simpler functionality rather than just more of it.

So there's nothing wrong with "lines of code" as a productivity metric. It's fine. You've been using it wrong. To understand the net value created by programmers, you have to look past productivity. You have to look at how and why and when people are using the solutions you create. You have to create feedback loops that extend all the way to the point where value is actually delivered, and you have to find ways to make those feedback loops fast and responsive. Otherwise, you're going to spend your time optimizing for metrics that don't matter.

Ben Rady is the author of "Continuous Testing with Ruby, Rails, and JavaScript". If you've enjoyed this post and would like to read more, you should follow him on Twitter.