[click here to zip down to the schedule of public events]

(I thank Yoshiyuki Kotani for this identity. We'll be able to use the same idea again in 2024 and 2027!)

Let's hope that the word ladder
`BOOST - BOAST - BEAST - LEAST - LEAPT - LEAPS - LEADS - LENDS -
LANDS - HANDS - HANDY - HARDY - HARPY - HAPPY`
will be appropriate for this year.
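For the record, each step of that ladder changes exactly one letter. A tiny sketch (my own illustration, not from this page) that verifies the property:

```python
def is_word_ladder(words):
    """True if each pair of consecutive words has equal length and
    differs in exactly one letter position."""
    return all(
        len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1
        for a, b in zip(words, words[1:])
    )

ladder = ("BOOST BOAST BEAST LEAST LEAPT LEAPS LEADS "
          "LENDS LANDS HANDS HANDY HARDY HARPY HAPPY").split()
# is_word_ladder(ladder) → True
```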

The sequence of Bernoulli numbers plays a prominent role in mathematics, especially in connection with asymptotic expansions related to Euler's summation formula. When I learned about this fascinating sequence, during my undergraduate days, I was taught that $B_1$ is equal to minus one-half. So I duly taught the same to my students, and went on to write books that explained what I thought Bernoulli and Euler had done.

But last year I took a close look at Peter Luschny's
Bernoulli
manifesto,
where he gives more than a dozen good reasons why the value of $B_1$
should really be *plus* one-half. He explains that some mathematicians
of the early 20th century had unilaterally
changed the conventions, because some of their
formulas came out a bit nicer when the negative value was used.
It was their well-intentioned but ultimately poor choice that had
led to what I'd been taught in the 1950s.

Luschny's webpage cites, for example, recent treatments of the subject by leading mathematicians such as Terence Tao. And his most compelling argument, from my personal perspective, is the way he unveils the early publications: I learned from him that my own presentation of the story, in The Art of Computer Programming and much more extensively in Concrete Mathematics, was a violation of history! I had put words and thoughts into Bernoulli and Euler's minds that were not theirs at all. This hurt, because I've always tried to present the evolution of ideas faithfully; in this case I'd fooled myself, by trying to conform what they wrote to what I'd learned.

By now, hundreds of books that use the “minus-one-half” convention have unfortunately been written. Even worse, all the major software systems for symbolic mathematics have that 20th-century aberration deeply embedded. Yet Luschny convinced me that we have all been wrong, and that it's high time to change back to the correct definition before the situation gets even worse.

Therefore I changed the definition of $B_1$ in all printings of The Art of Computer Programming during the latter half of 2021. And the new (34th) printing of Concrete Mathematics, released in January 2022, contains the much more extensive changes that are needed to tell a more comprehensive story.

These changes to Concrete Mathematics are too numerous
to incorporate into the
online errata. Therefore I've prepared
**replacement pages**
for anybody who wants to upgrade their copy of the second edition.
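The two conventions are easy to compare computationally, since they differ only in the sign of $B_1$ (in general $B_n^- = (-1)^n B_n^+$). Here is a minimal sketch of my own (not code from either book), using the recurrence $\sum_{k=0}^{n-1}\binom{n}{k}B_k = n$, which holds under the plus convention:

```python
from fractions import Fraction
from math import comb

def bernoulli_plus(n_max):
    """Bernoulli numbers B_0..B_{n_max} with the B_1 = +1/2 convention,
    computed from the recurrence sum_{k=0}^{n-1} C(n,k) B_k = n (n >= 1)."""
    B = [Fraction(1)]                    # B_0 = 1
    for n in range(2, n_max + 2):
        partial = sum(comb(n, k) * B[k] for k in range(n - 1))
        B.append((n - partial) / Fraction(n))   # solve for B_{n-1}
    return B

B = bernoulli_plus(8)
# B_1 = +1/2 here; the "minus" convention flips only that one sign,
# since all other odd-indexed Bernoulli numbers are zero.
```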

Speaking of unfortunate conventions in the math and CS literature, I've recently been surprised to learn that graph theory textbooks still have a very weak way to define the concept of “weak components” of a directed graph. They copy Harary's half-hearted notion, originally stated almost as an afterthought, by which a digraph's weak components are simply its ordinary components when the directions of oriented edges are ignored. Sometimes definitions are given just to fill up space instead of to fill a need!

A far better way to define weak components was introduced by Ron Graham,
Theodor Motzkin, and yours truly in the paper “Complements and
transitive closures,” Discrete Mathematics **2** (1972),
17--29; reprinted as Chapter 25 of
Selected Papers on Discrete Mathematics.
This definition wins big because it has a rich theory and many practical
applications.

One nice way to think of it is by reference to strong components: The strong components of a digraph give the weakest partition of the vertices so that, when each part is collapsed into a single vertex, we get a partial order. The weak components give the weakest partition so that, when each part is collapsed into a single vertex, we get a total order.

For example, this digraph has 15 vertices (black dots), 9 strong components (in ovals), and 4 weak components (separated by vertical lines).

Another nice way to think of it is by reference to ordinary connectivity:
A digraph is weakly connected if and only if it cannot be divided into two
nonempty parts, L and R, such that (i) all vertices of R are reachable from
all vertices of L; but (ii) no vertex of L is reachable from any vertex
of R. Condition (ii) in an *undirected* graph is, of course,
ordinary connectivity.
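For small digraphs, the characterization above can be turned directly into a brute-force sketch: compute reachability, order the vertices so that each weak component is contiguous, and split at the valid "cuts." (This is my own illustration, with a made-up example graph, and is nowhere near linear time.)

```python
def reach(adj):
    """R[u] = set of vertices reachable from u by one or more edges."""
    R = {u: set() for u in adj}
    for s in adj:
        stack = list(adj[s])
        while stack:
            v = stack.pop()
            if v not in R[s]:
                R[s].add(v)
                stack.extend(adj[v])
    return R

def weak_components(adj):
    """Weak components in the Graham-Knuth-Motzkin sense: the finest
    vertex partition that collapses to a total order."""
    if not adj:
        return []
    R = reach(adj)
    # If u reaches v but not conversely, u's key is strictly larger,
    # so every weak component occupies a contiguous run of `order`.
    order = sorted(adj, key=lambda u: len(R[u] | {u}), reverse=True)
    comps, block = [], [order[0]]
    for i in range(1, len(order)):
        prefix, suffix = order[:i], order[i:]
        # Valid cut: everything before reaches everything after, never back.
        if all(b in R[a] and a not in R[b] for a in prefix for b in suffix):
            comps.append(block)
            block = []
        block.append(order[i])
    comps.append(block)
    return comps

# Vertices 1 and 2 form a cycle; 4 is incomparable with them, so all three
# share a weak component, while 3 (reachable from everything) stands alone.
example = {1: [2], 2: [1, 3], 3: [], 4: [3]}
```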

Bob Tarjan has devised beautiful algorithms to find both strong and
weak components in linear time. I've recently written an exposition
of those algorithms (and depth-first search in general), intended for
eventual publication in Section 7.4.1.2 of
The Art of Computer Programming.
You can read it here:
`fasc12+.pdf`
(and there's a
reward of 0x$1.00 if you are the first to discover
an error). The theory of weak components is introduced on pages 11–14;
relevant exercises are on page 21; answers to those exercises are
on pages 31–33.

Let's all agree as soon as possible to use the easily understood term
*undirected components*,
or (as suggested by Doug West) *underlying components*,
for what many people have unfortunately
been calling weak components, and to celebrate the properties
of directed graphs whose weak components are defined in a truly useful way.

The fourth volume of The Art of Computer Programming deals with Combinatorial Algorithms, the area of computer science where good techniques have the most dramatic effects. (I love it the most, because one good idea can often make a program run a million times faster.) It's a huge, fascinating subject, and I published Part 1 (Volume 4A, 883 pages, now in its nineteenth printing) in 2011.

Two-thirds of Part 2 (Volume 4B) are now available in preliminary paperback form as Volume 4, Fascicle 5 (v4f5): “Mathematical Preliminaries Redux; Introduction to Backtracking; Dancing Links”; and Volume 4, Fascicle 6 (v4f6): “Satisfiability”. Here are excerpts from the hype on the back cover of v4f5 (384 pages):

This fascicle, brimming with lively examples, forms the first third of what will eventually become hardcover Volume 4B. It begins with a 27-page tutorial on the major advances in probabilistic methods that have been made during the past 50 years, since those theories are the key to so many modern algorithms. Then it introduces the fundamental principles of efficient backtrack programming, a family of techniques that have been a mainstay of combinatorial computing since the beginning. This introductory material is followed by an extensive exploration of important data structures whose links perform delightful dances.

That section unifies a vast number of combinatorial algorithms by showing that they are special cases of the general XCC problem --- “exact covering with colors.” The firstfruits of the author's decades-old experiments with XCC solving are presented here for the first time, with dozens of applications to a dazzling array of questions that arise in amazingly diverse contexts.

The utility of this approach is illustrated by showing how it resolves and extends a wide variety of fascinating puzzles, old and new. Puzzles provide a great vehicle for understanding basic combinatorial methods and fundamental notions of symmetry. The emphasis here is on how to create new puzzles, rather than how to solve them. A significant number of leading computer scientists and mathematicians have chosen their careers after being inspired by such intellectual challenges. More than 650 exercises are provided, arranged carefully for self-instruction, together with detailed answers---in fact, sometimes also with answers to the answers.
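To make "exact covering" concrete: here is a set-based sketch of the plain exact cover problem (without the color extension of XCC). It uses the same minimum-remaining-values branching as Algorithm X, but none of the dancing-links machinery; the function and its example are mine, not from the fascicle.

```python
def exact_cover(items, options):
    """Yield every subset of `options` covering each item exactly once.
    `items` is a set; each option is a tuple of items."""
    if not items:
        yield []          # nothing left to cover: empty solution works
        return
    # Branch on the item covered by the fewest options ("MRV" heuristic).
    item = min(items, key=lambda i: sum(i in o for o in options))
    for opt in [o for o in options if item in o]:
        # Keep only the options disjoint from the chosen one.
        rest = [o for o in options if not set(o) & set(opt)]
        for sol in exact_cover(items - set(opt), rest):
            yield [opt] + sol

# Two ways to cover {1, 2, 3} exactly once with these options:
solutions = list(exact_cover({1, 2, 3}, [(1,), (2, 3), (1, 2), (3,)]))
```

The dancing-links structure in the fascicle makes the "remove intersecting options, then restore them on backtrack" step cheap; this sketch rebuilds the lists instead, which is fine only for tiny instances.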

And here is the corresponding hype on the back cover of v4f6 (310 pages, currently in its sixth printing):

This fascicle, brimming with lively examples, introduces and surveys “Satisfiability,” one of the most fundamental problems in all of computer science: Given a Boolean function, can its variables be set to at least one pattern of 0s and 1s that will make the function true?

Satisfiability is far from an abstract exercise in understanding formal systems. Revolutionary methods for solving such problems emerged at the beginning of the twenty-first century, and they've led to game-changing applications in industry. These so-called “SAT solvers” can now routinely find solutions to practical problems that involve millions of variables and were thought until very recently to be hopelessly difficult.

Fascicle 6 presents full details of seven different SAT solvers, ranging from simple algorithms suitable for small problems to state-of-the-art algorithms of industrial strength. Many other significant topics also arise in the course of the discussion, such as bounded model checking, the theory of traces, Las Vegas algorithms, phase changes in random processes, the efficient encoding of problems into conjunctive normal form, and the exploitation of global and local symmetries. More than 500 exercises are provided, arranged carefully for self-instruction, together with detailed answers.
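That problem statement is easy to turn into a toy solver. The following sketch of basic DPLL (unit propagation plus branching, with DIMACS-style integer literals where -v means NOT v) is my own, and is many orders of magnitude removed from the industrial-strength algorithms the fascicle describes:

```python
def dpll(clauses, assignment=None):
    """Toy DPLL: return a satisfying {var: bool} dict, or None if UNSAT.
    Clauses are lists of nonzero ints; -v denotes the negation of v."""
    assignment = dict(assignment or {})
    changed = True
    while changed:                      # unit propagation to a fixed point
        changed = False
        simplified = []
        for clause in clauses:
            lits, satisfied = [], False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    lits.append(lit)    # still unassigned
                elif (lit > 0) == val:
                    satisfied = True    # clause already true
                    break
            if satisfied:
                continue
            if not lits:
                return None             # empty clause: conflict
            if len(lits) == 1:          # unit clause forces a value
                assignment[abs(lits[0])] = lits[0] > 0
                changed = True
            simplified.append(lits)
        clauses = simplified
    if not clauses:
        return assignment               # every clause satisfied
    var = abs(clauses[0][0])            # branch on some unassigned variable
    for val in (True, False):
        result = dpll(clauses, {**assignment, var: val})
        if result is not None:
            return result
    return None
```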

I worked particularly hard while preparing many of the new exercises, attempting to improve on expositions that I found in the literature; and in several noteworthy cases, nobody has yet pointed out any errors. It would be nice to believe that I actually got the details right in my first attempt. But that seems unlikely, because I had hundreds of chances to make mistakes. So I fear that the most probable hypothesis is that nobody has been sufficiently motivated to check these things out carefully as yet.

I still cling to a belief that these details are extremely instructive, and I'm uncomfortable with the prospect of printing a hardcopy edition with so many exercises unvetted. Thus I would like to enter here a plea for some readers to tell me explicitly, “Dear Don, I have read exercise N and its answer very carefully, and I believe that it is 100% correct,” where N is one of the following exercises in Volume 4 Fascicle 5:

- MPR-28-29: Prove basic inequalities for sums of independent binary random variables
- MPR-50: Prove that Ross's conditional expectation inequality is sharper than the second moment inequality
- MPR-59: Derive the four functions theorem
- MPR-61: Show that independent binary random variables satisfy the FKG inequality
- MPR-99: Generalize the Karp–Upfal–Wigderson bound on expected loop iterations
- MPR-103-104: Study ternary “coupling from the past”
- MPR-121-122: Study the Kullback–Leibler divergence of one random variable from another
- MPR-127: Analyze the XOR of independent sparse binary vectors
- MPR-130-131: Derive paradoxical facts about the Cauchy distribution (which has “heavy tails”)
- 7.2.2-79: Analyze the sounds that are playable on the pipe organ in my home
- 7.2.2.1-29-30: Characterize all search trees that can arise with Algorithm X
- 7.2.2.1-53: Find every 4-clue instance of shidoku (4×4 sudoku)
- 7.2.2.1-55: Determine the fewest clues needed to force highly symmetric sudoku solutions
- 7.2.2.1-103: List all of the 12-tone rows with the all-interval property, and study their symmetries
- 7.2.2.1-104: Construct infinitely many “perfect” *n*-tone rows
- 7.2.2.1-115: Find all hypersudoku solutions that are symmetric under transposition or under 90° rotation
- 7.2.2.1-121: Determine which of the 92 Wang tiles in exercise 2.3.4.3–5 can actually be used when tiling the whole plane
- 7.2.2.1-129: Enumerate all the symmetrical solutions to MacMahon's triangle-tiling problem
- 7.2.2.1-147: Construct all of the “bricks” that can be made with MacMahon's 30 six-colored cubes
- 7.2.2.1-151-152: Arrange all of the path dominoes into a single loop
- 7.2.2.1-172: Find the longest snake-in-the-box paths and cycles that can be made by kings, queens, rooks, bishops, or knights on a chessboard
- 7.2.2.1-189: Determine the asymptotic behavior of the Gould numbers
- 7.2.2.1-196: Analyze the running time of Algorithm X on bounded permutation problems
- 7.2.2.1-262: Study the ZDDs for domino and diamond tilings that tend to have large “frozen” regions
- 7.2.2.1-305-306: Find optimum arrangements of the windmill dominoes
- 7.2.2.1-309: Find all ways to make a convex shape from the twelve hexiamonds
- 7.2.2.1-320: Find all ways to make a convex shape from the fourteen tetraboloes
- 7.2.2.1-323: Find all ways to make a skewed rectangle from the ten tetraskews
- 7.2.2.1-327: Analyze the Somap graphs
- 7.2.2.1-334: Build fake solutions for Soma-cube shapes
- 7.2.2.1-337: Design a puzzle that makes several kinds of “dice” from the same bent tricubes
- 7.2.2.1-346: Pack space optimally with small tripods
- 7.2.2.1-375: Determine the smallest incomparable dissections of rectangles into rectangles
- 7.2.2.1-387: Classify the types of symmetry that a polycube might have
- 7.2.2.1-394: Prove that every futoshiki puzzle needs at least six clues
- 7.2.2.1-415: Make an exhaustive study of homogeneous 5×5 slitherlink
- 7.2.2.1-424: Make an exhaustive study of 6×6 masyu
- 7.2.2.1-432: Find the most interesting 3×3 kakuro puzzles
- 7.2.2.1-442: Enumerate all hitori covers of small grids

Furthermore, I fondly hope that diligent readers will write and say “Dear Don, I have read exercise N and its answer very carefully, and I believe that it is 100% correct,” where N is one of the following exercises in Volume 4 Fascicle 6:

- 7.2.2.2-6: Verify a certain (previously unpublished) lower bound on van der Waerden numbers *W*(3,*k*)
- 7.2.2.2-57: Find a 6-gate way to match a certain 20-variable Boolean function at 32 given points
- 7.2.2.2-165: Devise an algorithm to compute the largest positive autarky of given clauses
- 7.2.2.2-177: Enumerate independent sets of flower snark edges
- 7.2.2.2-212: Prove that partial latin square construction is NP-complete
- 7.2.2.2-282: Find a linear certificate of unsatisfiability for the flower snark clauses
- 7.2.2.2-306-308: Study the reluctant doubling strategy of Luby, Sinclair, and Zuckerman
- 7.2.2.2-318: Find the best possible Local Lemma for *d*-regular dependency graphs with equal weights
- 7.2.2.2-322: Show that random-walk methods cannot always find solutions of locally feasible problems using independent random variables
- 7.2.2.2-335: Express the Möbius series of a cocomparability graph as a determinant
- 7.2.2.2-339: Relate generating functions for traces to generating functions for pyramids
- 7.2.2.2-347: Find the best possible Local Lemma for a given chordal graph with arbitrary weights
- 7.2.2.2-356: Prove the Clique Local Lemma
- 7.2.2.2-363: Study the stable partial assignments of a satisfiability problem
- 7.2.2.2-386: Prove that certain CDCL solvers will efficiently refute any clauses that have a short certificate of unsatisfiability
- 7.2.2.2-428: Show that Boolean functions don't always have forcing representations of polynomial size
- 7.2.2.2-442-444: Study the UC and PC hierarchy of progressively harder sets of clauses
- 7.2.2.2-518: Reduce 3SAT to testing the permanent of a {-1,0,1,2} matrix for zero

Please don't be alarmed by the highly technical nature of these examples;
more than 250 of the *other* exercises are *completely non-scary*,
indeed quite elementary. But of course I do want to go into high-level details also,
for the benefit of advanced readers; and those darker corners of my books
are naturally the most difficult to get right. Hence this plea for help.

Remember that you don't have to work the exercise first. You're allowed
to peek at the answer; in fact, you're even encouraged to do so.
Please send success reports to the usual address for bug reports
(`taocp@cs.stanford.edu`).
Thanks in advance!

By the way, if you want to receive a reward check for discovering an error in TAOCP, your best strategy may well be to scrutinize the answers to the exercises that are listed above.

Meanwhile I continue to work on the final third of Volume 4B, which already has many exciting topics of its own. Those sections are still in very preliminary form, but courageous readers who have nothing better to do might dare to take a peek at the comparatively raw copy in these “prefascicles.” One can look, for instance, at Pre-Fascicle 8a (Hamiltonian Paths and Cycles); Pre-Fascicle 9b (A Potpourri of Puzzles). Thanks to Tom Rokicki, those PostScript files are now searchable!

I seem to get older every day, and people keep asking me to reminisce about the glorious days of yore. If you're interested in checking out some of those videos and other archives, take a look at 2020's news page.

Although I must stay home most of the time and work on yet more books that I've promised to complete, I do occasionally get into speaking mode.

- Sunday, January 9, at First Lutheran Church, Palo Alto, 9am
  - leading an informal study of the Biblical book of Numbers
- Thursday, January 27, 5:00pm--5:48pm Eastern Time, in Doron Zeilberger's Rutgers Experimental Mathematics Seminar
  - Speaking (via Zoom) about Tchoukaillon numbers (slides) (their archives) (watch video)
- Wednesday, August 3, at 9:00am IDT (Israeli Daylight Time), which equals 11:00pm PDT on Tuesday August 2 in California
  - An invited talk All Questions Answered (via Zoom), as part of the CP 2022 Conference in Haifa

Click here for the “recent news” that was current at the end of 2021, if you're interested in old news as well as new news.