[click here to zip down to the schedule of public events]

(I thank Yoshiyuki Kotani for this identity. We'll be able to use the same idea again in 2024 and 2027!)

Let's hope that the word ladder
`BOOST - BOAST - BEAST - LEAST - LEAPT - LEAPS - LEADS - LENDS -
LANDS - HANDS - HANDY - HARDY - HARPY - HAPPY`
will be appropriate for this year.
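Just to be sure I haven't garbled it, here's a quick sanity check (a throwaway Python snippet of my own, not part of the puzzle) that each step of the ladder really changes exactly one letter:

```python
# Verify that consecutive words in the ladder differ in exactly one letter.
ladder = ("BOOST BOAST BEAST LEAST LEAPT LEAPS LEADS "
          "LENDS LANDS HANDS HANDY HARDY HARPY HAPPY").split()
for a, b in zip(ladder, ladder[1:]):
    changed = sum(x != y for x, y in zip(a, b))
    assert changed == 1, (a, b)
```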

The sequence of Bernoulli numbers plays a prominent role in mathematics, especially in connection with asymptotic expansions related to Euler's summation formula. When I learned about this fascinating sequence, during my undergraduate days, I was taught that $B_1$ is equal to minus one-half. So I duly taught the same to my students, and went on to write books that explained what I thought Bernoulli and Euler had done.

But last year I took a close look at Peter Luschny's
Bernoulli
manifesto,
where he gives more than a dozen good reasons why the value of $B_1$
should really be *plus* one-half. He explains that some mathematicians
of the early 20th century had unilaterally
changed the conventions, because some of their
formulas came out a bit nicer when the negative value was used.
It was their well-intentioned but ultimately poor choice that had
led to what I'd been taught in the 1950s.

Luschny's webpage cites, for example, recent treatments of the subject by leading mathematicians such as Terence Tao. And his most compelling argument, from my personal perspective, is the way he unveils the early publications: I learned from him that my own presentation of the story, in The Art of Computer Programming and much more extensively in Concrete Mathematics, was a violation of history! I had put words and thoughts into Bernoulli and Euler's minds that were not theirs at all. This hurt, because I've always tried to present the evolution of ideas faithfully; in this case I'd fooled myself, by trying to conform what they wrote to what I'd learned.

By now, hundreds of books that use the “minus-one-half” convention have unfortunately been written. Even worse, all the major software systems for symbolic mathematics have that 20th-century aberration deeply embedded. Yet Luschny convinced me that we have all been wrong, and that it's high time to change back to the correct definition before the situation gets even worse.

Therefore I changed the definition of $B_1$ in all printings of The Art of Computer Programming during the latter half of 2021. And the new (34th) printing of Concrete Mathematics, released in January 2022, contains the much more extensive changes that are needed to tell a more comprehensive story.

These changes to Concrete Mathematics are too numerous
to incorporate into the
online errata. Therefore I've prepared
**replacement pages**
for anybody who wants to upgrade their copy of the second edition.

Speaking of unfortunate conventions in the math and CS literature, I've recently been surprised to learn that graph theory textbooks still have a very weak way to define the concept of “weak components” of a directed graph. They copy Harary's half-hearted notion, originally stated almost as an afterthought, by which a digraph's weak components are simply its ordinary components when the directions of oriented edges are ignored. Sometimes definitions are given just to fill up space instead of to fill a need!

A far better way to define weak components was introduced by Ron Graham,
Theodor Motzkin, and yours truly in the paper “Complements and
transitive closures,” Discrete Mathematics **2** (1972),
17--29; reprinted as Chapter 25 of
Selected Papers on Discrete Mathematics.
This definition wins big because it has a rich theory and many practical
applications.

One nice way to think of it is by reference to strong components: The strong components of a digraph give the finest partition of the vertices such that, when each part is collapsed into a single vertex, we get a partial order. The weak components give the finest partition such that, when each part is collapsed into a single vertex, we get a total order.

For example, this digraph has 15 vertices (black dots), 9 strong components (in ovals), and 4 weak components (separated by vertical lines).

Another nice way to think of it is by reference to ordinary connectivity:
A digraph is weakly connected if and only if it cannot be divided into two
nonempty parts, L and R, such that (i) all vertices of R are reachable from
all vertices of L; but (ii) no vertices of L are reachable from any vertex
of R. Condition (ii) in an *undirected* graph is, of course,
ordinary connectivity.
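For readers who like to experiment, that characterization suggests a brute-force sketch (my own illustration, with invented names; emphatically *not* Tarjan's linear-time method): two vertices land in the same weak component exactly when they're linked by a chain of pairs that are mutually reachable or mutually unreachable.

```python
from itertools import product

def weak_components(n, edges):
    """Weak components of a digraph on vertices 0..n-1, by brute force:
    take the equivalence closure of "i and j are mutually reachable or
    mutually unreachable."  O(n^3); Tarjan does it in linear time."""
    reach = [[i == j for j in range(n)] for i in range(n)]
    for u, v in edges:
        reach[u][v] = True
    for k, i, j in product(range(n), repeat=3):     # Warshall's closure
        reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    parent = list(range(n))
    def find(x):                                    # union-find with halving
        while parent[x] != x:
            parent[x] = x = parent[parent[x]]
        return x
    for i, j in product(range(n), repeat=2):
        if reach[i][j] == reach[j][i]:              # same SCC, or incomparable
            parent[find(i)] = find(j)
    comps = {}
    for v in range(n):
        comps.setdefault(find(v), []).append(v)
    return sorted(comps.values())
```

For example, the two-vertex digraph 0 → 1 splits into two weak components, `[[0], [1]]`; but adding an isolated third vertex glues everything into one, `[[0, 1, 2]]`, because the new vertex is incomparable to both of the others.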

Bob Tarjan has devised beautiful algorithms to find both strong and
weak components in linear time. I've recently written an exposition
of those algorithms (and depth-first search in general), intended for
eventual publication in Section 7.4.1.2 of
The Art of Computer Programming.
You can read it here:
`fasc12+.pdf`
(and there's a
reward of 0x$1.00 if you are the first to discover
an error). The theory of weak components is introduced on pages 11–14;
relevant exercises are on page 21; answers to those exercises are
on pages 31–33.
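I won't spoil the exposition, but for orientation, the strong-components half can be sketched roughly as follows (a textbook-style Python rendering of my own, not the programs of the fascicle):

```python
def strong_components(n, adj):
    """Strong components via Tarjan's depth-first method (recursive sketch).
    adj[v] lists the successors of vertex v; components are emitted in
    reverse topological order of the condensation."""
    index = [None] * n          # discovery order of each vertex
    low = [0] * n               # lowest index reachable within the stack
    on_stack = [False] * n
    stack, comps, clock = [], [], [0]

    def dfs(v):
        index[v] = low[v] = clock[0]; clock[0] += 1
        stack.append(v); on_stack[v] = True
        for w in adj[v]:
            if index[w] is None:
                dfs(w)
                low[v] = min(low[v], low[w])
            elif on_stack[w]:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:  # v is the root of a strong component
            comp = []
            while True:
                w = stack.pop(); on_stack[w] = False; comp.append(w)
                if w == v:
                    break
            comps.append(comp)

    for v in range(n):
        if index[v] is None:
            dfs(v)
    return comps
```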

Let's all agree as soon as possible to use the easily understood term
*undirected components*,
or (as suggested by Doug West) *underlying components*,
for what many people have unfortunately
been calling weak components, and to celebrate the properties
of directed graphs whose weak components are defined in a truly useful way.

The fourth volume of The Art of Computer Programming deals with Combinatorial Algorithms, the area of computer science where good techniques have the most dramatic effects. I love it the most, because one good idea can often make a program run a million times faster. It's a huge, fascinating subject, and I published Part 1 (Volume 4A, 883 pages, now in its twenty-second printing) in 2011.

Ta da: My publishers told me at the end of September that Part 2 (732 pages) had just arrived at their warehouse. Shipments began in October and the book is already (November) in its second printing.

Full details, including a dozen reviews from people who have looked closely at previews of the content, appear on the publisher's website.

Preliminary versions of most of this material were published in 2015 and 2019 as Volume 4, Fascicle 6 and Volume 4, Fascicle 5. Errata to those paperbacks, as well as to all the hardcover volumes, can be found on the TAOCP home page.

I've spent considerable time, while preparing many of the new exercises, attempting to improve on expositions that I found in the literature. And in several noteworthy cases, nobody has yet pointed out any errors. It would be nice to believe that I actually got the details right on my first attempt. But that seems unlikely, because I had hundreds of chances to make mistakes. So I fear that the most probable hypothesis is that nobody has been sufficiently motivated to check the finer points out carefully as yet.

I still cling to a belief that such details are extremely instructive. Thus I would like to enter here a plea for some readers to tell me explicitly, “Dear Don, I have read exercise N and its answer very carefully, and I believe that it is 100% correct,” where N is one of the following exercises:

- MPR-28-29: Prove basic inequalities for sums of independent binary random variables
- MPR-50: Prove that Ross's conditional expectation inequality is sharper than the second moment inequality
- MPR-59: Derive the four functions theorem
- MPR-61: Show that independent binary random variables satisfy the FKG inequality
- MPR-99: Generalize the Karp–Upfal–Wigderson bound on expected loop iterations
- MPR-103-104: Study ternary “coupling from the past”
- MPR-121-122: Study the Kullback–Leibler divergence of one random variable from another
- MPR-127: Analyze the XOR of independent sparse binary vectors
- MPR-130-131: Derive paradoxical facts about the Cauchy distribution (which has “heavy tails”)
- 7.2.2-79: Analyze the sounds that are playable on the pipe organ in my home
- 7.2.2.1-29-30: Characterize all search trees that can arise with Algorithm X
- 7.2.2.1-55: Determine the fewest clues needed to force highly symmetric sudoku solutions
- 7.2.2.1-104: Construct infinitely many “perfect” *n*-tone rows
- 7.2.2.1-115: Find all hypersudoku solutions that are symmetric under transposition or under 90° rotation
- 7.2.2.1-121: Determine which of the 92 Wang tiles in exercise 2.3.4.3–5 can actually be used when tiling the whole plane
- 7.2.2.1-129: Enumerate all the symmetrical solutions to MacMahon's triangle-tiling problem
- 7.2.2.1-147: Construct all of the “bricks” that can be made with MacMahon's 30 six-colored cubes
- 7.2.2.1-151-152: Arrange all of the path dominoes into a single loop
- 7.2.2.1-172: Find the longest snake-in-the-box paths and cycles that can be made by kings, queens, rooks, bishops, or knights on a chessboard
- 7.2.2.1-189: Determine the asymptotic behavior of the Gould numbers
- 7.2.2.1-196: Analyze the running time of Algorithm X on bounded permutation problems
- 7.2.2.1-262: Study the ZDDs for domino and diamond tilings that tend to have large “frozen” regions
- 7.2.2.1-305-306: Find optimum arrangements of the windmill dominoes
- 7.2.2.1-309: Find all ways to make a convex shape from the twelve hexiamonds
- 7.2.2.1-320: Find all ways to make a convex shape from the fourteen tetraboloes
- 7.2.2.1-323: Find all ways to make a skewed rectangle from the ten tetraskews
- 7.2.2.1-327: Analyze the Somap graphs
- 7.2.2.1-334: Build fake solutions for Soma-cube shapes
- 7.2.2.1-337: Design a puzzle that makes several kinds of “dice” from the same bent tricubes
- 7.2.2.1-346: Pack space optimally with small tripods
- 7.2.2.1-375: Determine the smallest incomparable dissections of rectangles into rectangles
- 7.2.2.1-387: Classify the types of symmetry that a polycube might have
- 7.2.2.1-415: Make an exhaustive study of homogeneous 5×5 slitherlink
- 7.2.2.1-432: Find the most interesting 3×3 kakuro puzzles
- 7.2.2.1-442: Enumerate all hitori covers of small grids
- 7.2.2.2-6: Verify a certain (previously unpublished) lower bound on van der Waerden numbers *W*(3,*k*)
- 7.2.2.2-57: Find a 6-gate way to match a certain 20-variable Boolean function at 32 given points
- 7.2.2.2-165: Devise an algorithm to compute the largest positive autarky of given clauses
- 7.2.2.2-212: Prove that partial latin square construction is NP-complete
- 7.2.2.2-282: Find a linear certificate of unsatisfiability for the flower snark clauses
- 7.2.2.2-306-308: Study the reluctant doubling strategy of Luby, Sinclair, and Zuckerman
- 7.2.2.2-318: Find the best possible Local Lemma for *d*-regular dependency graphs with equal weights
- 7.2.2.2-322: Show that random-walk methods cannot always find solutions of locally feasible problems using independent random variables
- 7.2.2.2-335: Express the Möbius series of a cocomparability graph as a determinant
- 7.2.2.2-339: Relate generating functions for traces to generating functions for pyramids
- 7.2.2.2-347: Find the best possible Local Lemma for a given chordal graph with arbitrary weights
- 7.2.2.2-356: Prove the Clique Local Lemma
- 7.2.2.2-363: Study the stable partial assignments of a satisfiability problem
- 7.2.2.2-442-444: Study the UC and PC hierarchy of progressively harder sets of clauses
- 7.2.2.2-518: Reduce 3SAT to testing the permanent of a {-1,0,1,2} matrix for zero

Please don't be alarmed by the highly technical nature of these examples;
more than 500 of the *other* exercises are *completely non-scary*,
indeed quite elementary. But of course I do want to go into high-level details also,
for the benefit of advanced readers; and those darker corners of my books
are naturally the most difficult to get right. Hence this plea for help.

Remember that you don't have to work the exercise first. You're allowed
to peek at the answer; in fact, you're even encouraged to do so.
Please send success reports to the usual address for bug reports
(`taocp@cs.stanford.edu`).
Thanks in advance!

By the way, if you want to receive a reward check for discovering an error in TAOCP, your best strategy may well be to scrutinize the answers to the exercises that are listed above.

Meanwhile I continue to work on Part 3 (Volume 4C), which already has many exciting topics of its own. Those sections are still in very preliminary form, but courageous readers who have nothing better to do might dare to take a peek at the comparatively raw copy in these “prefascicles.” One can look, for instance, at Pre-Fascicle 8a (Hamiltonian Paths and Cycles); Pre-Fascicle 9b (A Potpourri of Puzzles). Thanks to Tom Rokicki, those PostScript files are now searchable!

I seem to get older every day, and people keep asking me to reminisce about the glorious days of yore. If you're interested in checking out some of those videos and other archives, take a look at 2020's news page.

While preparing to commemorate his father's centennial, Mark Trakhtenbrot recently found some photos of a unique event!

Stephen C Kleene, Donald E Knuth, and Boris Trakhtenbrot at a cotton commune near Urgench, Uzbekistan, 19 September 1979

Although I must stay home most of the time and work on yet more books that I've promised to complete, I do occasionally get into speaking mode.

- Sunday, January 9, at First Lutheran Church, Palo Alto, 9am
- leading an informal study of the Biblical book of Numbers
- Thursday, January 27, 5:00pm--5:48pm Eastern Time, in Doron Zeilberger's Rutgers Experimental Mathematics Seminar
- Speaking (via Zoom) about Tchoukaillon numbers (slides) (their archives) (watch video)
- Wednesday, August 3, at 9:00am IDT (Israeli Daylight Time), which equals 11:00pm PDT on Tuesday August 2 in California
- An invited talk All Questions Answered (via Zoom), as part of the CP 2022 Conference in Haifa (watch video)
- Thursday, September 29 at 3pm PDT, in room 384H of Stanford's math building
- Speaking about Sierpiński Simplex Graphs as part of Stanford's Combinatorics Seminar
- Wednesday, December 7 at 5pm PST, in the NVIDIA Auditorium (lower level, Huang Engineering Center)
- A Computer Musing entitled “Twintrees, Baxter Permutations, and Floorplans” (the 26th annual Christmas lecture, not counting pandemic years) (watch video)

Click here for the “recent news” that was current at the end of 2021, if you're interested in old news as well as new news.