[click here to zip down to the schedule of public lectures]

Happy 2010 (i.e., MMIX++) to all!

I'm pleased to announce the publication of Selected Papers on Design of Algorithms, which contains updated versions of more than two dozen papers that I've had lots of fun writing over the years. Be the first to find a mistake in this book!

Paperback previews of new material for The Art of Computer Programming, called ``fascicles,'' have been published occasionally since 2005. Late this year, a hardback Volume 4A will be cobbled together from the five booklets illustrated above, with amendments and corrections that have been suggested by hundreds of helpful readers who have been beta-testing the preliminary installments and providing great feedback.

Here are details of the paperbacks currently available:

Volume 4 Fascicle 0, Introduction to Combinatorial Algorithms and Boolean Functions (2008), xii+216pp. ISBN 0-321-53496-4; fourth printing (August 2009).

Volume 4 Fascicle 1, Bitwise Tricks & Techniques; Binary Decision Diagrams (2009), xiii+261pp. ISBN 0-321-58050-8; second printing (August 2009).

Volume 4 Fascicle 2, Generating All Tuples and Permutations (2005), v+128pp. ISBN 0-201-85393-0; third printing (January 2010).

Volume 4 Fascicle 3, Generating All Combinations and Partitions (2005), vi+150pp. ISBN 0-201-85394-9; third printing (May 2009).

Volume 4 Fascicle 4, Generating All Trees; History of Combinatorial Generation (2006), vi+120pp. ISBN 0-321-33570-8; fourth printing (May 2009).

Booksellers offer a special deal: a nice discount if you purchase all five, ``shrink-wrapped'' together.

These fascicles include approximately 1500 exercises, together with answers for self-study. They comprise 818 pages in all, not counting the front matter and indexes. Hundreds of useful facts appear here that cannot be found in any other publication, as far as I know.

Naturally I'm anxious to get all these details right, because I've had thousands and thousands of opportunities to make mistakes (and I must admit that I didn't always get 100% on the exams that I took during my student days). Therefore I am happy to offer reward checks whenever somebody helps me pick out a few more nits.

Should you purchase the paperbacks now, given that the hardcover version is less than a year away? That depends on (1) how long you want to wait before learning some of this material---which I believe you'll find useful for the rest of your life, if you are a born computer programmer; and on (2) how much you'd like to help.

*Note to Computer Science professors:* I've heard that some college
courses for grad students and advanced undergrads are being based
on one or more of these fascicles. Please let me know about your
experiences, if you try such an experiment.

*Note to Internet friends:* I'm extremely grateful that hundreds of
you have taken time to read these drafts, and to detect and report
errors that you've found. Your comments have improved the material
enormously. But I must confess that I'm also disappointed to have had
absolutely no feedback so far on several of the exercises on which I
worked hardest when I was preparing this material. Could it be that
(1) you've said nothing about them because I somehow managed to get
the details perfect? Or is it that (2) you shy away from the more
difficult stuff, being unable to spend more than a few minutes on any
particular topic? Although I do not like to think that readers are
lazy, I fear that hypothesis (1) is far less likely than hypothesis
(2). I may have to remove material that nobody cares about.
But I still cling to a belief that these details are extremely instructive,
and I'm uncomfortable with the prospect of printing a hardcopy edition
with so many exercises unvetted.

Thus I would like to enter here a plea for some readers to tell me explicitly, ``Dear Don, I have read exercise N and its answer very carefully, and I believe that it is 100% correct,'' where N is one of the following:

- 7--97 (properties of graph sums and graph products)
- 7--137 and 138 (generalized toruses)
- 7.1.1--122 (simple examples of regular functions that aren't threshold functions)
- 7.1.2--16 (minimum-memory computation of 7-variable functions)
- 7.1.2--43 (efficient finite state transduction with small depth)
- 7.1.2--63 (upper bound for functions with lots of don't-cares)
- 7.1.2--67 (design of a tic-tac-toe chip)
- 7.1.2--68 (analysis of a functional decomposition algorithm)
- 7.1.2--76 (Uhlig's cloning algorithm, which computes twice as fast as expected)
- 7.1.2--85 and 86 (introduction to Razborov's monotone lower bounds)
- 7.1.3--124 (lower bounds on a basic RAM)
- 7.1.3--158 and 159 (Fibonacci and negaFibonacci codeword manipulation)
- 7.1.3--179 (online algorithm for components of a bitmap)
- 7.1.4--107 (testing whether a BDD defines a 2SAT instance)
- 7.1.4--110 (explicit functions of $n$ variables whose BDD is largest)
- 7.1.4--124 (computing the BDD size for the hidden weighted bit function, given any permutation of the variables)
- 7.1.4--226 (a new way to generate all simple cycles of a graph)
- 7.1.4--259 (generating all set partitions with ZDDs)
- 7.2.1.1--97 (analysis of a coroutine-based algorithm)
- 7.2.1.1--99 (decoding a recursively generated de Bruijn cycle)
- 7.2.1.3--25 (proof of van Zanten's theorem)
- 7.2.1.3--33 (enumeration of genlex listings for index lists)
- 7.2.1.3--42 (analysis of an algorithm for near-perfect combination generation)
- 7.2.1.3--63 (minimal-change algorithm for contingency tables)
- 7.2.1.3--100 (the heaviest multicomplexes)
- 7.2.1.4--28 (Lehmer's fabulous formulas, to be checked empirically)
- 7.2.1.5--28 and 29 (theory of generalized rook polynomials)
- 7.2.1.6--19 (proof that Skarbek's algorithm is dual to lexicographic generation)
- 7.2.1.6--27, 28 (basic facts about important lattices of trees)
- 7.2.1.6--33 (representing binary tree links with a single permutation)
- 7.2.1.6--37 (analysis of the Zaks--Richards algorithm for trees by degrees)
- 7.2.1.6--99 (quick Gray code to list spanning trees of a series-parallel graph)

Remember that you don't have to work the exercise first; you're allowed and
even encouraged to peek at the answer.
Please send success reports to the usual address for bug reports
(`taocp@cs.stanford.edu`),
if you have time to provide this extra help. Thanks in advance!


As an experiment, my Christmas Tree Lecture this year (see below) will
be webcast live and freely available for viewing anywhere in the world.
Stanford's Center for Professional
Development is calling it course GW017.
To try it, you have to get an account at myStanfordConnection;
but I've tried that and it seems relatively painless. Further
information is
here.

Although I must stay home most of the time and work on books that I've promised to complete, I do occasionally get into speaking mode. Here is a current schedule of events that have been planned for this year so far:

- Saturday, 16 January, 5:00pm to 5:20pm in room 3006 of Moscone Center West in San Francisco: lecturing about ``Importunate problems'' as part of the AMS Special Session on Permutations, II, at the Joint Mathematics Meetings
- Wednesday, 5 May (time and place to be announced), Carnegie Mellon University: giving the Katayanagi Prize Lecture ``ZDD structures and families of sets''
- Thursday 6 May at 4:30pm and Friday 7 May at 3:30pm (place to be announced), Carnegie Mellon University: an informal minicourse about ZDDs, with audience participation encouraged
- Friday, 14 May, approximately 2:30pm to 3:30pm, Room 421 in the Glennan Building at Case Western Reserve University: ``All Questions Answered,'' as part of the festivities for the 125th anniversary of the Case Alumni Association
- Wednesday, 30 June, 5:30pm, at the Sir Francis Drake Hotel in San Francisco: ``An Earthshaking Announcement'' at TeX's 32nd Anniversary Celebration, on the final day of the TUG 2010 Conference (view video)
- Monday, 6 December, 5:30pm, in the new NVIDIA auditorium (Huang Engineering Center): a Computer Musing entitled ``Why pi?'' [the sixteenth annual Christmas Tree Lecture]

Click here for the ``recent news'' that was current at the end of 2009, if you're interested in old news as well as new news.