Concurrency: Are We Focusing On the Wrong Thing?
It is fairly widely accepted that concurrent programming is becoming more important, and that this trend will only accelerate over the coming years. This can be traced back to Herb Sutter’s excellent article The Free Lunch Is Over, which elaborates on how we hit the wall in terms of CPU clock speed around 2003, and how further CPU performance gains have come from hyperthreading and multicore ever since. The figure used in that article is such a dramatic illustration that I’ve seen it used dozens of times in talks and articles since:
But something bothers me in all the talk about concurrency, and I think it derives from focusing too much on speed. It is true that CPU-bound applications no longer benefit from the “free lunch” of increasing clock speeds. However, I would contend that the vast majority of applications are not CPU bound. Further, many applications that do need a lot of CPU power are already naturally parallelisable, such as the fast-growing category of multi-user web applications.
So although clock speeds have stagnated, for many programmers it won’t have any real performance impact. Most desktop applications will get along just fine, still bound by I/O or a squishy, organic, slow-moving user. Web applications will naturally take advantage of more cores as they appear. The only programmers likely to notice the performance impact are those working in areas where performance has always been at the bleeding edge, which frankly is a small minority.
This doesn’t mean we’re all off the hook, though. Even though few of us write CPU-bound code, many more of us write multithreaded code. Badly. The only reason our code works is because it isn’t run on truly multi-processor systems, and certainly not on systems with dozens of CPUs (or more). Whole classes of concurrency bugs don’t show up until code is stressed on highly parallelised hardware. And it just gets worse as more CPUs or cores are added. We’ve got away with this for a while, but when every desktop has dozens of cores, ignorance will cease to be bliss.
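As a minimal sketch of the kind of bug I mean (the class names and counts here are my own invention, purely for illustration), consider the classic lost-update race. An unsynchronized `counter++` is really a load, an add, and a store, so two threads can interleave and silently drop increments — something that rarely shows up on a single core but reliably does on a real multiprocessor:

```java
// Sketch of the classic lost-update race: two threads increment two
// counters. The plain int++ is a non-atomic read-modify-write and can
// lose updates under true parallelism; AtomicInteger cannot.
import java.util.concurrent.atomic.AtomicInteger;

public class LostUpdate {
    static int plain = 0;                           // unsynchronized -- racy
    static final AtomicInteger atomic = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                plain++;                            // load, add, store: interleavable
                atomic.incrementAndGet();           // single atomic operation
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        // The atomic counter is always exactly 2,000,000. The plain one
        // often is too on one core -- and typically falls short on many.
        System.out.println("atomic = " + atomic.get());
    }
}
```

The unsettling part is that the racy version passes most casual testing; the hardware has to cooperate before the bug appears.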
Sutter’s article covers these points, but like most other discussions I think it both underplays them relative to the performance issues, and underestimates how difficult it is to learn concurrent programming. This is a much greater challenge than learning a design technique like OO — concurrent programs, at least those using current locking techniques, are insanely hard to reason about. The true solution, in the opinion of many (myself included), is to move away from shared memory and explicit locking as much as possible. This is a key reason why pure functional programming is seeing a resurgence.
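To sketch what “moving away from shared memory and explicit locking” can look like in a mainstream language (the class and the sentinel value are my own illustrative choices, not a prescribed pattern), here is a producer and consumer that share nothing but a queue. No locks appear in user code at all; all synchronization is hidden inside the queue:

```java
// Sketch of message-passing style: threads communicate by handing values
// over a BlockingQueue instead of mutating shared state under a lock.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MessagePassing {
    private static final int POISON = -1;   // sentinel marking end of stream

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(16);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 100; i++) queue.put(i);
                queue.put(POISON);          // tell the consumer we are done
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread consumer = new Thread(() -> {
            try {
                int sum = 0;
                for (int msg = queue.take(); msg != POISON; msg = queue.take())
                    sum += msg;             // local state only: nothing shared
                System.out.println("sum = " + sum);  // 1 + 2 + ... + 100
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start(); consumer.start();
        producer.join(); consumer.join();
    }
}
```

This is essentially the Erlang model in miniature, and it is far easier to reason about than two threads contending over shared variables.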
Sutter was right: the free lunch is over. Only it’s not performance we’ll have to pay for: it’s correctness.
This entry was posted on Tuesday, April 21st, 2009 at 4:53 am and is filed under Opinion, Technology. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.


April 21st, 2009 at 7:26 am
Well, web applications cannot take advantage of multicore in terms of concurrency and stability because they are not written to take advantage of the underlying OS. Almost all web apps are written to take advantage of what the application server delivers in the way of resources.
There is also the scripting-language or programming-language shortcoming of being delivered through an interpreter or compiler that cannot do anything more than run a single process. Almost every attempt to get multi-threading to work with PHP, Java, Python and others has been an exercise in instability. This is why FastCGI is so popular nowadays and will become more popular.
You are right in saying multi-threaded programming is hard and requires a lot of planning and forethought beforehand. So many will opt to use FastCGI and languages like Erlang, where an application can be built using multiple single-threaded processes that can be distributed throughout a network and run as lightweight processes on those multi-core and SMP machines.
I have chosen to go the way of Erlang rather than FastCGI or learning to do LWP in ASP.NET. I think that Erlang has advantages over FastCGI in terms of administration, installation and room for improvement.
April 22nd, 2009 at 7:09 pm
Carl, you’re totally wrong about “instabilities”. At least for Java. Java servlets and EJBs are multithreaded by nature. They’re very stable and have impressive performance. You’re also wrong that JVM threads execute on only one physical thread – the JVM effectively utilizes multiple hardware cores (especially on SPARC processors).
The main problem in multi-threading is how to turn a linear algorithm into a parallel one that effectively uses multiple cores.
April 23rd, 2009 at 4:23 pm
I beg to differ. Concurrency and multithreading go hand in hand with scalability. I will even go so far as to say distributed concurrency across multiple JVMs is still a problem with Java. The use of semaphores for thread communication is always a point of instability because not every programmer gets it right every time.
Also, since we are mostly in a place where web applications are the reason for the need for more processing power and speed, Apache becomes one of the weak links. This is why I lump Java in with things like mod_perl, mod_python and the rest. They are one of the reasons that PHP is so popular with web hosts, and why the cost of shared hosting for them is on average higher: their usage requires more software and more administration to make sure things don’t go horribly wrong.
There are many technologies that are multi-threaded “by nature” but few that are multi-threaded by design. It is this difference that makes it hard to do multi-threaded programming and to use multi-threading as a solution to scalability. The same is true for concurrency.
April 23rd, 2009 at 4:51 pm
This will not happen. Open-source web applications are dominating the web, and they are written using scripting languages like PHP, Ruby (with Rails) and ASP.NET. None of these will naturally scale without prodding by the programmer. That requires thinking ahead to concurrency and scalability, but since the greater percentage of these apps are organically produced, it is not done.
April 23rd, 2009 at 6:47 pm
Hi Carl,
You make some fair points about practical scalability issues, and of course I agree with you on the points about multithreaded programming complexity in mainstream languages today. However, on the scaling side, I think you are getting sidetracked from the point I am trying to make. What I am looking at is how multicore will change how people write programs. The common contention is that we’ll all have to start looking for ways to parallelise our sequential code to take advantage of multiple cores.
Multi-user web applications, however, are parallel by nature. Multicore doesn’t completely change the game for these apps. The problems you talk about existed before the rise of multicore, as the largest web applications already used multi-CPU machines and many of them.