Tuesday, May 31, 2011

The positivity pipe dream fulfilled?

This post is about my recently accepted SINUM paper. It's an attempt to provide, in a less formal way, the broad context for the paper and related work that couldn't be included in a journal publication.

In 1979, Bolley and Crouzeix published a fairly astonishing result (in French here). Namely, they showed that, for linear differential equations whose solution is always positive, it is impossible to design numerical methods that always yield a positive solution under arbitrarily large step sizes, even if one considers the very broad class of ODE solvers known as general linear methods. The only exception to this statement is the backward Euler method, which has many nice properties but is, unfortunately, too inaccurate for most applications.
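To make the contrast concrete, here is a small sketch of my own (an illustration, not taken from their paper): first-order upwind differencing of the advection equation gives a linear system u' = Au whose exact solution stays nonnegative, and the two Euler methods behave very differently on it.

```python
import numpy as np

n, h = 16, 1.0 / 16
# Periodic first-order upwind discretization of u_t + u_x = 0:
# du_i/dt = (u_{i-1} - u_i)/h.  The off-diagonal entries of A are
# nonnegative, so exact solutions of u' = A u stay nonnegative.
A = (np.roll(np.eye(n), 1, axis=0) - np.eye(n)) / h

u0 = np.zeros(n)
u0[3] = 1.0
dt = 3 * h  # three times the forward Euler positivity limit (dt <= h)

# Forward Euler: u1 = (I + dt A) u0 -- produces negative values here.
u_fe = (np.eye(n) + dt * A) @ u0

# Backward Euler: (I - dt A) u1 = u0 -- nonnegative for ANY dt > 0,
# since I - dt A is an M-matrix with entrywise nonnegative inverse.
u_be = np.linalg.solve(np.eye(n) - dt * A, u0)

print(u_fe.min() < 0, u_be.min() >= 0)  # True True
```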

This result stands in stark contrast to corresponding results on, say, stability in inner-product norms, where use of implicit methods can get you unconditional stability even for nonlinear problems.

I'm speaking roughly here, and won't attempt to be more precise (go read the paper if you want details). But the motivation cited by Bolley and Crouzeix for looking at this question was the idea that one might be able to take large time steps and maintain positivity in the solution of hyperbolic problems. Unfortunately, their result showed that this was not possible unless one was willing to settle for first-order accuracy.

What this result didn't indicate is how large the positivity-preserving step size could be for an implicit method. This question was partially answered within a decade, by Lenferink and by van de Griend & Kraaijevanger, for linear multistep methods and for Runge-Kutta methods, respectively. In both cases they found that the largest positivity-preserving step size was no more than twice the step size allowed by the forward Euler method.
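That factor of two can be observed numerically (again a sketch of my own, not taken from those papers): the implicit trapezoidal rule (Crank-Nicolson) is second-order and A-stable, yet for upwind-discretized advection it preserves positivity only up to twice the forward Euler step size.

```python
import numpy as np

n, h = 16, 1.0 / 16
# Periodic first-order upwind discretization of u_t + u_x = 0
A = (np.roll(np.eye(n), 1, axis=0) - np.eye(n)) / h
I = np.eye(n)

u0 = np.zeros(n)
u0[3] = 1.0

def trapezoidal_step(u, dt):
    """One step of the implicit trapezoidal (Crank-Nicolson) rule."""
    return np.linalg.solve(I - 0.5 * dt * A, (I + 0.5 * dt * A) @ u)

u_at_limit = trapezoidal_step(u0, 2.0 * h)  # dt = 2x the forward Euler limit
u_beyond = trapezoidal_step(u0, 2.1 * h)    # just beyond it

print(u_at_limit.min() >= 0, u_beyond.min() < 0)  # True True
```

The forward Euler positivity limit here is dt = h, so the trapezoidal rule loses positivity exactly where the theory says it must: at twice that step size.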

Of course, an implicit solve costs significantly more than an explicit solve, so gaining only a factor of two in the step size just isn't worth it. I don't know of any further work on this problem between 1991 and two fairly recent papers of my own. The results there are consistent with the apparent barrier of a 2X step size, although see Note 1 at the end.

In one attempt to circumvent this bound, Gottlieb, Macdonald & Ruuth investigated the class of diagonally split Runge-Kutta methods (which aren't general linear methods, so they escape the implications of Bolley & Crouzeix). They did find higher-order methods with unconditional strong stability properties, but the accuracy of these methods invariably reduced to first order when they were used with large time steps! It seemed that any effort to find unconditionally positive methods would be thwarted one way or another.

This sets the stage for my recently accepted SINUM paper. In this paper I found that by using both upwind-biased and downwind-biased discretizations (an idea that goes all the way back to Shu's original paper on "TVD time discretizations") in implicit Runge-Kutta methods, one can obtain second-order accurate methods that preserve positivity under arbitrarily large step sizes -- and they have this property even when applied to nonlinear problems. Remarkably, the methods appear to give quite accurate results when applied to problems with shocks.

It seems that we have gotten around the 2X barrier at last! But important issues remain, most notably the efficient implementation of these "implicit downwind Runge-Kutta schemes" in combination with high order hyperbolic PDE discretizations (like WENO).

I must reiterate that I've glossed over several important technical points here. For more details, go read the paper.

Note 1: In my Ph.D. thesis, I did find a method that breaks the 2X barrier, but only just barely and only for linear problems.

Note 2: I've referred rather carelessly in this post to literature on positivity preservation, contractivity, and strong stability preservation (monotonicity) without distinguishing between the three, simply because the relevant conditions on the method turn out to be the same. The articles mentioned generally focus on one property or another, but the results almost always carry over to all three.

Sunday, May 29, 2011

How do shock waves behave in inhomogeneous materials?

It's well known that solutions of genuinely nonlinear hyperbolic PDEs develop shock singularities in finite time, under very weak assumptions on the initial data. However, proofs of this statement invariably assume uniformity of the PDE coefficients in space and time. What if the coefficients are allowed to vary, as would be the case for waves in many real materials, whose properties may be random or periodic?
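To recall why the homogeneous statement holds, here is the standard textbook picture (my sketch, not from the manuscript): for Burgers' equation u_t + u u_x = 0, the solution is constant along characteristics x(t) = x0 + u0(x0) t, and neighboring characteristics first cross (that is, a shock forms) at time T = -1/min(u0'). For u0(x) = sin(x), that gives T = 1.

```python
import numpy as np

# Characteristics of Burgers' equation u_t + u u_x = 0 emanating
# from x0 carry the value u0(x0): x(t) = x0 + u0(x0) * t.
x0 = np.linspace(0.0, 2.0 * np.pi, 2001)
u0 = np.sin(x0)

def min_spacing(t):
    """Minimum gap between neighboring characteristics at time t;
    a nonpositive value means characteristics have crossed (a shock)."""
    return np.min(np.diff(x0 + u0 * t))

# The shock time for this data is T = -1/min(u0') = 1:
print(min_spacing(0.99) > 0, min_spacing(1.01) > 0)  # True False
```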

Surprisingly little is known about the answer to this question, but a first, partial attempt to answer it is made in my recently submitted manuscript "Shock dynamics in layered periodic media". Among the "shocking" findings:

- For certain media and relatively general initial conditions, shock formation seems not to occur even after extremely long times.
- Shocks that would be stable in a homogeneous medium are frequently not stable in a heterogeneous medium.
- The asymptotic behavior of solutions in heterogeneous media is generally different; rather than consisting of N-waves, the solutions may be composed of solitary waves, for instance.

To get an idea of what's going on, take a look at some movies showing animations of the remarkable behavior of the solution.

Tuesday, May 24, 2011

My favorite new tool: ack

If you're like me, about once a week you type something like

>> grep words

and then wait a few seconds before remembering that you really meant

>> grep words *

or perhaps

>> grep -r words *

If so, do yourself a favor and install ack right now. You'll be glad you did.

Note: ack tries to intelligently search only filetypes that it believes to be text. Unfortunately, its built-in list of extensions for such files is incomplete. For instance, it does not include .rst (reStructuredText) files. To add more extensions, just create a file ".ackrc" in your home directory and put the following line in it:

--type-set=TYPE=.ext

Here TYPE is the name of the filetype (anything you want) and .ext is the extension of that filetype. For example:

--type-set=rst=.rst

Monday, May 23, 2011

Why I won't be a Mendeley university advisor (for now)

Mendeley, my tool of choice for bibliographic reference management and sharing, just announced their "University Advisor" program. Basically, they're asking academics to advocate for them on campus in exchange for free premium accounts.

I already advocate for Mendeley unofficially, because I think it is useful and its usefulness will grow in proportion to the number of people who adopt it. But I'd rather not be officially associated with Mendeley, mostly because it's not yet a sufficiently polished product.

Mendeley is a useful product with a lot of potential, but it still has a lot of serious problems. Some of these seem like telltale signs that the basic software and data infrastructure on which Mendeley is built has fundamental and dangerous flaws.

For instance, my Mendeley collection has 516 documents according to both the Desktop app and the web app. But the "my library stats" page says I have only 328 articles. This doesn't cause any real problems, but the idea that these counts are maintained in a way that lets them get permanently out of sync is disturbing. Furthermore, I had an e-mail exchange with Mendeley support about this a couple of months ago; they admitted it was a problem, but they've been unable to solve it.

Duplicates. This is a huge problem with Mendeley. If I import a document twice, even from the same source, I get duplicates. If I drag a document from my library to a "group" twice, I get duplicates. The latter behavior is really inexcusable, since this operation occurs entirely within Mendeley. It also makes it very painful to use groups (so painful that I've stopped using them). I have a group called "Runge-Kutta stability regions", and I'd like to keep all papers in my library with the tag of the same name in that group. This would be easy if I could just periodically select the tag and drag all papers to the group, but that's a recipe for disaster since I'll end up with duplicates. CiteULike sync is also a disaster, as it generates many duplicates.

Bibtex. Mendeley just doesn't seem to pay enough attention to bibtex-centric user needs. In the desktop app, citation keys are not shown by default. When one right-clicks on a document and selects "copy citation" (with the citation style set to bibtex), the generated citation is different from what one gets from "export citation" (the cite keys don't agree). Mendeley mangles bibtex fields on import. When articles are removed from Mendeley, they may persist in the auto-synced bibtex file. Combined with the duplicates problem, this can be a nightmare.

Sharing. Mendeley is supposed to facilitate sharing, but unfortunately it also restricts sharing in one very important way: my library is not publicly accessible, and I cannot make it so. Why not? Do people have something to hide in their library of scholarly references?

I could go on, but you get the idea. Mendeley has a lot of things going for it, not the least of which is a very active development team, and I hope it succeeds in achieving what its developers intend. It's my tool of choice, but as its name indicates, it's still beta.

Thursday, May 19, 2011

What is science?

Today I received an e-mail from a collaborator of mine stating

...our community doesn't reward engineering effort but instead scientific advances.

The context was a discussion of what is publishable in scientific software development. This got me thinking about what really is the difference between science and engineering, and what is the difference between publishable advances and the rest of the work that scientists do.

After some thought I've concluded that this difference is in many respects artificial and purely a function of one's discipline (or even sub-discipline). To take algorithmic complexity as an example:

- To a complexity theorist, a reduction in complexity from 1000N^3 to 3N^2 would be considered "engineering effort": after all, either one is in P, right?

- To an applied mathematician, the above result would be considered a fabulous scientific advance, but a further reduction from 3N^2 to 2N^2 might not be deemed publishable.

- To a chemist, say, who needs to run the algorithm with 10000 different parameter sets, even a 10% improvement might be considered a valuable scientific advance.

Of course, to a pure mathematician any algorithm for solving the problem is an engineering detail; the only scientific aspect is proving the existence of a unique solution.
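A back-of-the-envelope comparison (with an arbitrary problem size, chosen just for scale) shows why each community's threshold is defensible:

```python
N = 10_000  # arbitrary illustrative problem size

ops_before = 1000 * N**3   # 10^15 operations
ops_after = 3 * N**2       # 3 * 10^8 operations
speedup = ops_before / ops_after
print(round(speedup))  # 3333333 -- "engineering effort" worth six orders of magnitude

# Meanwhile, a 10% improvement over 10000 parameter sets saves the
# equivalent of 1000 full runs:
saved = 10_000 // 10
print(saved)  # 1000
```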

All of these advances may be important steps leading from an abstract idea to a technological breakthrough that benefits non-scientists. I think it's okay that all the above researchers are only interested in one particular step in this chain of advances. What's detrimental, however, is the inability to recognize that the other steps in this chain are valuable. This kind of bigotry is, in my experience, rather common among scientists.

Edit: Ironically, the quote at the beginning came from a person whose title is "Professor of Engineering".