[click here to zip down to the schedule of public events]
(I thank Yoshiyuki Kotani for this identity. We'll be able to use the same idea again in 2024 and 2027!)
Let's hope that the word ladder BOOST - BOAST - BEAST - LEAST - LEAPT - LEAPS - LEADS - LENDS - LANDS - HANDS - HANDY - HARDY - HARPY - HAPPY will be appropriate for this year.
The sequence of Bernoulli numbers plays a prominent role in mathematics, especially in connection with asymptotic expansions related to Euler's summation formula. When I learned about this fascinating sequence, during my undergraduate days, I was taught that $B_1$ is equal to minus one-half. So I duly taught the same to my students, and went on to write books that explained what I thought Bernoulli and Euler had done.
But last year I took a close look at Peter Luschny's Bernoulli manifesto, where he gives more than a dozen good reasons why the value of $B_1$ should really be plus one-half. He explains that some mathematicians of the early 20th century had unilaterally changed the conventions, because some of their formulas came out a bit nicer when the negative value was used. It was their well-intentioned but ultimately poor choice that had led to what I'd been taught in the 1950s.
Luschny's webpage cites, for example, recent treatments of the subject by leading mathematicians such as Terence Tao. And his most compelling argument, from my personal perspective, is the way he unveils the early publications: I learned from him that my own presentation of the story, in The Art of Computer Programming and much more extensively in Concrete Mathematics, was a violation of history! I had put words and thoughts into Bernoulli and Euler's minds that were not theirs at all. This hurt, because I've always tried to present the evolution of ideas faithfully; in this case I'd fooled myself, by trying to conform what they wrote to what I'd learned.
By now, hundreds of books that use the “minus-one-half” convention have unfortunately been written. Even worse, all the major software systems for symbolic mathematics have that 20th-century aberration deeply embedded. Yet Luschny convinced me that we have all been wrong, and that it's high time to change back to the correct definition before the situation gets even worse.
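The two conventions are easy to compare computationally. Here's a small sketch (my own illustration, not taken from any of the books) that generates both sequences from their standard recurrences, $\sum_{k=0}^{m-1}\binom{m+1}{k}B_k + (m+1)B_m = 0$ for the minus convention and the analogous recurrence summing to $m+1$ for the plus convention:

```python
from fractions import Fraction
from math import comb

def bernoulli_minus(n):
    """B_0..B_n with the 20th-century convention B_1 = -1/2,
    from the recurrence sum_{k<=m} C(m+1,k) B_k = 0 for m >= 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(Fraction(-sum(comb(m + 1, k) * B[k] for k in range(m)), m + 1))
    return B

def bernoulli_plus(n):
    """B_0..B_n with the convention B_1 = +1/2 advocated by Luschny,
    from the recurrence sum_{k<=m} C(m+1,k) B_k = m + 1 for m >= 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(Fraction(m + 1 - sum(comb(m + 1, k) * B[k] for k in range(m)),
                          m + 1))
    return B
```

Running both confirms that only $B_1$ is affected: $B_n^+ = (-1)^n B_n^-$, and the odd-indexed values beyond $B_1$ all vanish anyway.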
Therefore I changed the definition of $B_1$ in all printings of The Art of Computer Programming during the latter half of 2021. And the new (34th) printing of Concrete Mathematics, released in January 2022, contains the much more extensive changes that are needed to tell a more comprehensive story.
These changes to Concrete Mathematics are too numerous to incorporate into the online errata. Therefore I've prepared replacement pages for anybody who wants to upgrade their copy of the second edition.
Speaking of unfortunate conventions in the math and CS literature, I've recently been surprised to learn that graph theory textbooks still have a very weak way to define the concept of “weak components” of a directed graph. They copy Harary's half-hearted notion, originally stated almost as an afterthought, by which a digraph's weak components are simply its ordinary components when the directions of oriented edges are ignored. Sometimes definitions are given just to fill up space instead of to fill a need!
A far better way to define weak components was introduced by Ron Graham, Theodor Motzkin, and yours truly in the paper “Complements and transitive closures,” Discrete Mathematics 2 (1972), 17--29; reprinted as Chapter 25 of Selected Papers on Discrete Mathematics. This definition wins big because it has a rich theory and many practical applications.
One nice way to think of it is by reference to strong components: The strong components of a digraph give the weakest (that is, the finest) partition of the vertices such that, when each part is collapsed into a single vertex, we get a partial order. The weak components give the weakest partition such that, when each part is collapsed into a single vertex, we get a total order.
For example, this digraph has 15 vertices (black dots), 9 strong components (in ovals), and 4 weak components (separated by vertical lines).
Another nice way to think of it is by reference to ordinary connectivity: A digraph is weakly connected if and only if it cannot be divided into two nonempty parts, L and R, such that (i) every vertex of R is reachable from every vertex of L, but (ii) no vertex of L is reachable from any vertex of R. Condition (ii), in an undirected graph, is of course just ordinary connectivity.
Bob Tarjan has devised beautiful algorithms to find both strong and weak components in linear time. I've recently written an exposition of those algorithms (and depth-first search in general), intended for eventual publication in Section 7.4.1.2 of The Art of Computer Programming. You can read it here: fasc12+.pdf (and there's a reward of 0x$1.00 if you are the first to discover an error). The theory of weak components is introduced on pages 11–14; relevant exercises are on page 21; answers to those exercises are on pages 31–33.
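Tarjan's linear-time algorithms belong in the fascicle, not here; but as a quick illustration of the concept, here is a brute-force sketch (my own code, far from linear time). It rests on an observation that follows from the definitions above: two vertices must share a weak component whenever they are mutually reachable or mutually unreachable, and the weak components are the classes of the transitive closure of that symmetric relation.

```python
from itertools import combinations

def weak_components(vertices, edges):
    """Brute-force weak components: NOT Tarjan's linear-time method.
    Merge every pair of vertices that is mutually reachable or
    mutually unreachable, then return the resulting classes."""
    succ = {v: set() for v in vertices}
    for u, v in edges:
        succ[u].add(v)

    def reach(s):                       # vertices reachable from s (incl. s)
        seen, stack = {s}, [s]
        while stack:
            for w in succ[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen

    r = {v: reach(v) for v in vertices}
    parent = {v: v for v in vertices}   # simple union-find

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in combinations(vertices, 2):
        if (v in r[u]) == (u in r[v]):  # same strong component, or incomparable
            parent[find(u)] = find(v)

    comps = {}
    for v in vertices:
        comps.setdefault(find(v), set()).add(v)
    return list(comps.values())
```

For example, the path a→b→c has three weak components (its vertices are already totally ordered), while adding the back edge b→a, or making c incomparable to everything, collapses them.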
Let's all agree as soon as possible to use the easily understood term undirected components, or (as suggested by Doug West) underlying components, for what many people have unfortunately been calling weak components, and to celebrate the properties of directed graphs whose weak components are defined in a truly useful way.
The fourth volume of The Art of Computer Programming deals with Combinatorial Algorithms, the area of computer science where good techniques have the most dramatic effects. (I love it the most, because one good idea can often make a program run a million times faster.) It's a huge, fascinating subject, and I published Part 1 (Volume 4A, 883 pages, now in its nineteenth printing) in 2011.
Two-thirds of Part 2 (Volume 4B) are now available in preliminary paperback form as Volume 4, Fascicle 5 (v4f5): “Mathematical Preliminaries Redux; Introduction to Backtracking; Dancing Links”; and Volume 4, Fascicle 6 (v4f6): “Satisfiability”. Here are excerpts from the hype on the back cover of v4f5 (384 pages):
This fascicle, brimming with lively examples, forms the first third of what will eventually become hardcover Volume 4B. It begins with a 27-page tutorial on the major advances in probabilistic methods that have been made during the past 50 years, since those theories are the key to so many modern algorithms. Then it introduces the fundamental principles of efficient backtrack programming, a family of techniques that have been a mainstay of combinatorial computing since the beginning. This introductory material is followed by an extensive exploration of important data structures whose links perform delightful dances.
That section unifies a vast number of combinatorial algorithms by showing that they are special cases of the general XCC problem --- “exact covering with colors.” The firstfruits of the author's decades-old experiments with XCC solving are presented here for the first time, with dozens of applications to a dazzling array of questions that arise in amazingly diverse contexts.
The utility of this approach is illustrated by showing how it resolves and extends a wide variety of fascinating puzzles, old and new. Puzzles provide a great vehicle for understanding basic combinatorial methods and fundamental notions of symmetry. The emphasis here is on how to create new puzzles, rather than how to solve them. A significant number of leading computer scientists and mathematicians have chosen their careers after being inspired by such intellectual challenges. More than 650 exercises are provided, arranged carefully for self-instruction, together with detailed answers---in fact, sometimes also with answers to the answers.
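The flavor of exact covering is easy to convey in miniature. Here is a toy sketch of my own (plain backtracking over the XC problem, with none of the dancing links or colors that make the fascicle's solvers interesting; the names are invented):

```python
def exact_cover(items, options):
    """Toy exact-cover backtracking, without dancing links.
    `options` maps an option name to the set of items it covers;
    yields each solution as a list of option names."""
    def solve(uncovered, avail, chosen):
        if not uncovered:
            yield list(chosen)
            return
        # branch on an uncovered item with the fewest usable options
        item = min(uncovered,
                   key=lambda i: sum(1 for o in avail if i in options[o]))
        for o in sorted(avail):
            if item in options[o]:
                # options sharing a newly covered item become unusable
                avail2 = {p for p in avail
                          if p != o and not (options[p] & options[o])}
                yield from solve(uncovered - options[o], avail2, chosen + [o])

    yield from solve(set(items), set(options), [])
```

With items {1,2,3,4} and options A={1,2}, B={3,4}, C={1,3}, D={2,4}, E={1,2,3,4}, it finds exactly the three solutions {A,B}, {C,D}, and {E}.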
And here is the corresponding hype on the back cover of v4f6 (310 pages, currently in its sixth printing):
This fascicle, brimming with lively examples, introduces and surveys “Satisfiability,” one of the most fundamental problems in all of computer science: Given a Boolean function, can its variables be set to at least one pattern of 0s and 1s that will make the function true?
Satisfiability is far from an abstract exercise in understanding formal systems. Revolutionary methods for solving such problems emerged at the beginning of the twenty-first century, and they've led to game-changing applications in industry. These so-called “SAT solvers” can now routinely find solutions to practical problems that involve millions of variables and were thought until very recently to be hopelessly difficult.
Fascicle 6 presents full details of seven different SAT solvers, ranging from simple algorithms suitable for small problems to state-of-the-art algorithms of industrial strength. Many other significant topics also arise in the course of the discussion, such as bounded model checking, the theory of traces, Las Vegas algorithms, phase changes in random processes, the efficient encoding of problems into conjunctive normal form, and the exploitation of global and local symmetries. More than 500 exercises are provided, arranged carefully for self-instruction, together with detailed answers.
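None of those seven solvers can be reproduced in a few lines, but the Davis–Putnam–Logemann–Loveland skeleton from which they all descend can. Here's a toy sketch (my own code; literals are signed integers in the usual DIMACS style, so +n means variable n and -n means its complement):

```python
def simplify(clauses, lit):
    """Make literal `lit` true: drop satisfied clauses, shrink the others.
    Returns None if an empty clause (a contradiction) appears."""
    out = []
    for c in clauses:
        if lit in c:
            continue
        reduced = [x for x in c if x != -lit]
        if not reduced:
            return None
        out.append(reduced)
    return out

def dpll(clauses):
    """Tiny DPLL: returns a list of true literals, or None if unsatisfiable."""
    if not clauses:
        return []
    for c in clauses:                   # unit propagation
        if len(c) == 1:
            rest = simplify(clauses, c[0])
            if rest is None:
                return None
            sub = dpll(rest)
            return None if sub is None else [c[0]] + sub
    lit = clauses[0][0]                 # branch on the first literal seen
    for choice in (lit, -lit):
        rest = simplify(clauses, choice)
        if rest is not None:
            sub = dpll(rest)
            if sub is not None:
                return [choice] + sub
    return None
```

A modern solver adds conflict-driven clause learning, clever heuristics, and data structures that make propagation nearly free; that's where the real story in the fascicle begins.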
I worked particularly hard while preparing many of the new exercises, attempting to improve on expositions that I found in the literature; and in several noteworthy cases, nobody has yet pointed out any errors. It would be nice to believe that I actually got the details right in my first attempt. But that seems unlikely, because I had hundreds of chances to make mistakes. So I fear that the most probable hypothesis is that nobody has been sufficiently motivated to check these things out carefully as yet.
I still cling to a belief that these details are extremely instructive, and I'm uncomfortable with the prospect of printing a hardcopy edition with so many exercises unvetted. Thus I would like to enter here a plea for some readers to tell me explicitly, “Dear Don, I have read exercise N and its answer very carefully, and I believe that it is 100% correct,” where N is one of the following exercises in Volume 4 Fascicle 5:
Furthermore, I fondly hope that diligent readers will write and say “Dear Don, I have read exercise N and its answer very carefully, and I believe that it is 100% correct,” where N is one of the following exercises in Volume 4 Fascicle 6:
Please don't be alarmed by the highly technical nature of these examples; more than 250 of the other exercises are completely non-scary, indeed quite elementary. But of course I do want to go into high-level details also, for the benefit of advanced readers; and those darker corners of my books are naturally the most difficult to get right. Hence this plea for help.
Remember that you don't have to work the exercise first. You're allowed to peek at the answer; in fact, you're even encouraged to do so. Please send success reports to the usual address for bug reports (firstname.lastname@example.org). Thanks in advance!
By the way, if you want to receive a reward check for discovering an error in TAOCP, your best strategy may well be to scrutinize the answers to the exercises that are listed above.
Meanwhile I continue to work on the final third of Volume 4B, which already has many exciting topics of its own. Those sections are still in very preliminary form, but courageous readers who have nothing better to do might dare to take a peek at the comparatively raw copy in these “prefascicles.” One can look, for instance, at Pre-Fascicle 8a (Hamiltonian Paths and Cycles); Pre-Fascicle 9b (A Potpourri of Puzzles). Thanks to Tom Rokicki, those PostScript files are now searchable!
I seem to get older every day, and people keep asking me to reminisce about the glorious days of yore. If you're interested in checking out some of those videos and other archives, take a look at 2020's news page.
Although I must stay home most of the time and work on yet more books that I've promised to complete, I do occasionally get into speaking mode.