Parallelizing people

Developing parallel computer algorithms is becoming more important as CPU architectures jettison clock speed in favor of multicore designs.[1] Indeed, many scientific applications are moving beyond CPUs to graphics processing units (GPUs), which are composed of thousands of individual cores that can complete certain tasks orders of magnitude faster than CPUs. The concept of GPU computing was visually demonstrated by the MythBusters, who unveiled a massively parallel paintball bazooka (with 1100 individual barrels standing in for a GPU's many computing cores) that physically produced a painting of the Mona Lisa in 80 milliseconds.[2]

What about human parallelization? In software development, it's certainly possible to achieve spectacular gains in productivity by progressing from a single coffee-addled, "overclocked" hacker to a large and distributed team. For example, few would have predicted that a complex, tightly integrated, and extremely stable operating system could be produced by a team of volunteers working in their spare time.[3] Yet Linux is just that, and it somehow manages to be competitive with billion-dollar efforts in the commercial sector![4]

How can open source and parallel software development work so well? Karl Fogel offers one clue in his book, Producing Open Source Software:[5] namely, that debugging works remarkably well in parallel. Not only do more contributors add more "eyes" on the code, but large and diverse teams are more likely to contain someone who has the precise background needed to identify a subtle bug. Such an effect can be seen outside of the software world as well; for example, the internet community recently decoded a cryptic, decades-old letter from a mother to her grown children in very little time.[6] The key to success? Some of the amateur sleuths were intimately familiar with the common Christian prayers that were the focus of her letter. I've had similar luck when posting a tricky probability question to the Mathematics Stack Exchange website; the problem proved non-trivial for two esteemed mathematician colleagues but was expertly answered several times over (including formulas!) in under an hour by internet citizens more familiar with that type of problem. The first correct response arrived within 15 minutes.

Unfortunately, examples of such community-driven development are rare in the field of materials science (especially in the United States). For example, none of the current electronic structure databases fit the bill. Perhaps this is because such efforts are still young and building momentum. However, it is also possible that scientists underestimate the difficulty of building software with a healthy community of developers. Adding to the difficulty are challenges particular to the scientific realm.

The following are some early experiences from building software for the Materials Project.

Work with computer scientists (but don’t expect them to solve all your computer science problems)

Software development is a fast-moving field, and computer scientists can provide crucial guidance on modern software technologies and development practices. Yet, at the same time, the bulk of the actual programming tasks will most likely involve gluing together software design principles with materials science applications. The most effective "bonders" are those who hybridize themselves between materials science and computer science. Trying to divide programming work into materials science problems and computer science problems leads to weak bonds that are more susceptible to communication overhead and more likely to impede progress. Therefore, the Materials Project generally hires research postdocs who are also competent programmers. One surprisingly effective strategy in this effort is to employ a basic programming assignment to assess core competency and motivation before the phone interview stage.[7]

Structure code for compartmentalized development (then work a lot to help people anyway)

Getting the community to adopt your project is quite difficult (my own software project, FireWorks, has certainly not gotten there yet!). One thing that is certain is that functional, useful code combined with an open license does not equal a community-driven project. Contrary to potential fears of code theft or harsh criticism upon going open source (or dreams of immediate fame and thank-you letters), the most likely outcome of sharing code is that the world will not take much notice.

The usual tips involve writing code that at least has the potential for distributed development, e.g., by writing modular code and spending time on documentation.[5] But following this advice is more difficult when working with scientific collaborators. For example, modular software that computer scientists can read easily is often impenetrable to materials scientists, simply because the latter are often unfamiliar with programming abstractions, such as object-oriented programming, that are meant to facilitate scalability and productivity. And because a codebase itself is generally not seen as the final output (scientists are hired and promoted based on scientific results and journal articles, not GitHub contributions), scientists can be poorly motivated to work on the documentation, code cleanup, or unit tests that could serve as a force multiplier for collaboration.
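To make the point about abstractions concrete, here is a minimal, hypothetical Python sketch (not taken from any Materials Project code; the numbers and class names are invented) of the same toy task written two ways: the flat script a domain scientist might reach for first, and the modular, object-oriented version a software engineer might prefer.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch; the energies and class names are invented for illustration.

# Style 1: the one-off script. Easy for anyone to follow, but nothing is reusable.
energies = {"Si": -5.42, "Ge": -4.49}  # eV/atom (made-up numbers)
for formula, energy in energies.items():
    print(f"{formula}: {energy:.2f} eV/atom")


# Style 2: the modular version. New analyses can plug into the same interface
# later, but the indirection can look impenetrable to a non-programmer.
class Analysis(ABC):
    """Base class so that different analyses share a common entry point."""

    @abstractmethod
    def run(self, data: dict) -> dict:
        ...


class EnergyReport(Analysis):
    """The same arithmetic as the script above, now hidden behind an interface."""

    def run(self, data: dict) -> dict:
        return {formula: round(energy, 2) for formula, energy in data.items()}


print(EnergyReport().run(energies))
```

Both versions produce the same answer; the second simply trades immediate readability for extensibility, and that trade-off is exactly what can alienate a collaborator who only wants the answer.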

A public codebase is like a community mural that must recruit diverse volunteers and be able to extend and fix itself.

Unfortunately, perhaps the only way to write good code is to first write lots of bad code. This can lead to tension, because senior developers can act like already-industrialized nations that expect newcomers never to "pollute" the codebase while having themselves been guilty of producing poor code during their own development.

One strategy to address mixed levels of programming proficiency is to entrust the more technical programmers with the overall code design and core library elements and to train newcomers by having them implement specific and limited components.[8] Surprisingly, the situation can end up not too different from that described in Wikipedia’s article on early Netherlandish painting:[9]

“…the master was responsible for the overall design of the painting, and typically painted the focal portions, such as the faces, hands and the embroidered parts of the figure’s clothing. The more prosaic elements would be left to assistants; in many works it is possible to discern abrupt shifts in style, with the relatively weak Deesis passage in van Eyck’s Crucifixion and Last Judgement diptych being a better-known example.”

If an in-house software collaboration is like a small artist studio, a public software project might be more like a large, ever-expanding community mural that must recruit and retain random volunteers of various skill levels. Somehow, the project must reach a point where the mural can largely extend itself in unexpected and powerful ways while still maintaining a consistent and uniform artistic vision. In particular, the project is truly healthy only when (a) it can quickly integrate new contributions and fixes to older sections and (b) one is confident that the painting would go on even if the lead artist departs.[5] These added considerations involve many human factors and can often be more difficult to achieve than producing good code.

We are often our own worst enemy

As scientists, we are often our own worst enemy in scaling software projects. Whereas computer scientists generally take pride in their "laziness" and happily reuse or adopt existing code (probably learning something in the process), regular scientists are by nature xenophobic about outside code and prefer to write their own versions from scratch using programming techniques they are comfortable with. This often leads to myopic software and stagnation in programming paradigms. In particular, the programming model of "single-use script employs custom parser to read haphazard input file to produce custom output file" is severely outdated but still extremely common in scientific codes.
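As a hypothetical illustration (the toy input format and function names below are invented, not drawn from any particular code), compare a hand-rolled parser for an ad-hoc "key = value" file with simply reading a structured, self-describing format via the standard library:

```python
import json


# The outdated pattern: a custom parser for a haphazard, one-off text format
# such as "cutoff = 520  # eV". Every group ends up writing its own variant.
def parse_legacy_input(path: str) -> dict:
    params = {}
    with open(path) as f:
        for line in f:
            line = line.split("#")[0].strip()  # strip ad-hoc comments
            if not line:
                continue
            key, value = line.split("=", 1)    # hope every line has an "="
            params[key.strip()] = value.strip()
    return params


# The alternative: a structured format (JSON here) that any language, tool,
# or collaborator can read without a custom parser.
def parse_structured_input(path: str) -> dict:
    with open(path) as f:
        return json.load(f)
```

The point is not JSON specifically but the habit: once inputs and outputs live in standard, structured formats, other people's tools (and other people) can build on them without reverse-engineering yet another custom parser.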

Scientists can also be very protective of code, weighting the potential negative aspects of being open (e.g., unhelpful criticism, non-attribution) too heavily and weighting its benefits (bug fixes, enhancements, impact) too little. In particular, even some supposedly "open" scientific software requires that the code be requested explicitly by personal email or comes saddled with non-compete agreements. It is unclear what fear motivates the authors to put up barricades around programs that they ostensibly want to share. But it is doubtful that Linux, despite its brilliant kernel, would have ever seen such success if all users and collaborators were required to first write Linus Torvalds an email and agree never to work on a competing operating system.

As the years pass, what distinguishes one electronic structure database effort from another may not be the number of compounds or its initial software infrastructure but rather how successfully it leverages the community to scale beyond itself. It will most likely be a difficult exercise in human parallelization, but it can't be more complicated than writing an operating system with a bunch of strangers, right?

Footnotes
[1] Some of the issues in parallel programming are summarized here.
[2] Here's that MythBusters video.
[3] The Cathedral and the Bazaar by Eric S. Raymond.
[4] One (crude) estimate puts the cost of Windows Vista development at 10 billion dollars. The budget for Windows 8 advertising alone is estimated to be over a billion dollars.
[5] Producing Open Source Software by Karl Fogel.
[6] The internet quickly decodes a decades-long family mystery.
[7] The usefulness of the programming challenge and other tips for hiring programmers are explained by Coding Horror.
[8] Another strategy is to employ multiple codebases that are cleanly separated in functionality but integrate and stack in a modular way. For example, the Materials Project completely separates the development of its workflow code from its materials science code. Such “separation of powers” can also accommodate different personalities by giving different members full ownership of one code, affording them the authority to resolve small and counterproductive arguments quickly (à la the benevolent dictator model of software management).
[9] Wikipedia article on Early Netherlandish painting. Incidentally, while Wikipedia is often criticized for being unreliable due to its crowdsourced nature, a Nature study found that the online material of Britannica was itself guilty of 2.92 mistakes per science article; Wikipedia was not much worse at 3.86 mistakes per article. Another interpretation is that both of these numbers are way too large!