Enter: The Concurrency Problem. Moore's Law is starting to run out. We've all gotten very used to using hardware to solve scalability problems, and up until now, that's been a relatively easy thing to do. But chip manufacturers are starting to hit physical limits of transistor size and density. Now, the only way to scale up is to scale out by adding more cores, and that means concurrency. "But concurrent programming is hard!", they say. "And we have an army of semi-conscious Java developers in our company that wouldn't know a semaphore from a telegraph line! Oh noes! What will we do?"
The current fashionable solution to this problem is functional programming languages. They are the next big thing. Languages such as Scala, Clojure, Erlang, and Haskell are supposedly going to save us from the dreaded intermittent bugs and horrible deadlocks intrinsic in concurrent systems built with object oriented or procedural languages. The implication in this argument is that functional programming languages are more thread safe because (depending on the language) they either shy away from side effects or find ways to eliminate them completely.
Well, I'm sorry, but I'm just not buying it.
Don't get me wrong. Concurrent programming in a typical object oriented language is hard. Really hard. I've slogged through enough of it to learn to just avoid it unless you really, really need it. Especially when you're dealing with shared state. Michael Feathers once told me that he thought if threads had never been invented, and someone had proposed them at a conference today, they'd be laughed out of the building. He may have been kidding, but honestly, I would believe it.
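To make the shared-state pain concrete, here's a minimal sketch of the classic lost-update race — plain Java, nothing beyond the standard library, all names mine. Two threads hammer a bare int counter and an AtomicInteger side by side; the atomic one always lands on the right answer, the bare one usually doesn't.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedStateRace {
    static int unsafeCount = 0;                       // plain shared state, no synchronization
    static final AtomicInteger safeCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCount++;                        // read-modify-write: not atomic, updates can be lost
                safeCount.incrementAndGet();          // atomic: never loses an update
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();

        // safeCount is always 200000; unsafeCount is typically less on a multicore machine
        System.out.println("unsafe=" + unsafeCount + " safe=" + safeCount.get());
    }
}
```

That the unsafe counter's final value is unpredictable from run to run is exactly the kind of intermittent bug I'm talking about.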
My problem with the functional programming cure to this problem is that I haven't seen a cure yet that's better than the problem. Maybe it's me. Maybe my tiny Ben brain just can't comprehend the magic and majesty of OCaml, but every time I've tried to learn a new functional language, it's ended in disaster. First with Erlang, and then Haskell, I've now had two aborted attempts to understand the supposed cure to the concurrency problem. The languages are just too complex, too terse, and absolutely full of academic shitheaddery. I mean, seriously, what the hell.
If your concern with the death of Moore's law is that you won't be able to afford/find skilled programmers to scale your systems using multicore processors, these are not the droids you're looking for.
But even if you've got a bunch of really sharp developers on your team, I'm not sure this is the solution either. Message based architectures (especially asynchronous ones) are good, pretty easy to understand, and easy to test. Shared state in concurrent systems is a pain, and you shouldn't do that. Interprocess communication is relatively easy through various methods. Whether you're using an object oriented, pure functional, or hybrid language, these things are true. And while it might be possible to ignore thread safety problems by just working in a purely functional language, I think you're just trading one kind of complexity for another. And if you're working in a hybrid language, what assurances do you have, really, that your system is really thread safe? Indeed, the risk compensation effect might give you a false sense of security, giving you a system that actually has more concurrency problems than one written in a "dangerous" language.
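Here's what I mean by message passing being easy to understand and test — a minimal sketch in plain Java, no actor library, all names mine. The worker only ever touches state it owns; the one way in is a BlockingQueue, and the one way out is another.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MessagePassing {
    // The worker owns its state outright; messages are the only way to affect it.
    static final class SummingWorker implements Runnable {
        private final BlockingQueue<Integer> inbox;
        private final BlockingQueue<Integer> results;
        private int total = 0;                        // private state, never shared

        SummingWorker(BlockingQueue<Integer> inbox, BlockingQueue<Integer> results) {
            this.inbox = inbox;
            this.results = results;
        }

        public void run() {
            try {
                Integer msg;
                while ((msg = inbox.take()) >= 0) {   // a negative message means "shut down"
                    total += msg;
                }
                results.put(total);                   // reply with the final state
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> inbox = new ArrayBlockingQueue<>(16);
        BlockingQueue<Integer> results = new ArrayBlockingQueue<>(1);
        new Thread(new SummingWorker(inbox, results)).start();

        for (int i = 1; i <= 4; i++) inbox.put(i);    // send messages: 1, 2, 3, 4
        inbox.put(-1);                                // shut the worker down

        System.out.println("sum=" + results.take());  // prints sum=10
    }
}
```

Testing this is just putting messages on the inbox and asserting on the reply — no locks, no interleavings to reason about. And nothing about it depends on the language being functional.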
I think the future of computing is people. Lots of people. Web applications, social networks, and millions (billions?) of mobile devices interacting over the net. I think the majority of these problems can be solved with multi-process, message passing architectures. In the end, a large scale system might wind up with one core for every 10 users. Maybe less. If multiple cores are the new gigahertz, I would not be the least surprised if computing gets that cheap. All it would have to do is follow the trend of the last 40 years, just along a different axis. Really, the only barrier to a process-oriented approach in that case would be memory, and that may not actually be a limitation at all.
So I think the cases where a functional programming language is really the best solution to the concurrency problem are, in fact, rather rare. It's worth noting that even after Twitter raised eyebrows by moving their backend processing from Ruby to Scala, they "reverted" to Java for some of the concurrent operations:
initially we had a number of actors inside the system...over time as we ran more and more system tests on it, we found that actors weren’t necessarily the ideal concurrency model for all parts of that system. Some parts of the concurrency model of that system are still actor based....but for other parts we’ve just gone back to a traditional Java threading model. The engineer working on that, John Kalucki, just found it was a little bit easier to test, a bit more predictable.
So I need someone to explain this to me. The state of the art in functional languages scares me a lot more than intelligently-designed concurrent programming, but everyone is hopping on the bandwagon, so I feel I must be doing something wrong. So, in typical fashion, I've made a heretical claim in the hopes that someone will show me the light and prove me wrong. Because I seriously don't get it.