I ran into a situation today that was particularly frustrating, but for an interesting reason.
PostgreSQL tables are hard-limited to a maximum of 1600 columns. The limit, as I found out after a bit of digging, is specified in the PostgreSQL source and accompanied by a short explanation:
The key limit on this value is that the size of the fixed overhead for a tuple, plus the size of the null-values bitmap (at 1 bit per column), plus MAXALIGN alignment, must fit into t_hoff which is uint8. On most machines the upper limit without making t_hoff wider would be a little over 1700.
So, with a bit of leeway for practical considerations, the column limit is effectively set to 1600, and changing it would require recompiling the PostgreSQL source. Additionally, any change that widened t_hoff enough to raise the limit would have a severe impact on disk usage, and would most likely affect the query planner, which I assume takes the size of tuples and their headers into account when creating an execution plan.
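If you want to see the limit in action, a quick sketch against a throwaway database is enough. The table and column names below are invented purely for illustration:

```sql
-- Build a CREATE TABLE statement with 1601 integer columns and run it.
-- PostgreSQL refuses it outright, with an error along the lines of
-- "tables can have at most 1600 columns".
DO $$
DECLARE
    cols text;
BEGIN
    SELECT string_agg('c' || i || ' integer', ', ')
      INTO cols
      FROM generate_series(1, 1601) AS i;
    EXECUTE 'CREATE TABLE too_wide (' || cols || ')';
END;
$$;
```

Nothing about the table's contents matters here; the check is purely on the number of attribute slots the table would consume.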
But Why Should You Care?
While the table column limit is an interesting factoid, this does raise an obvious question: How did I run into this limit in the first place?
The answer is not as sacrilegious as many people assumed when I mentioned this on Twitter: I, thankfully, did not have a database table with 1500+ columns that needed a few dozen more added to it.
In reality, a particular project of ours at Fictive Kin contains a relatively extensive test suite, including tests for each database migration to ensure that the schema manipulations produce the desired effect and, more importantly, that the application itself functions correctly with the new structure. For various operational reasons, the suite does not drop and recreate the entire database on each run.
As a result, every time I ran the full set of tests for the migrations, I was adding and then subsequently removing (schema downgrades are also tested) a variety of columns from tables in the test database.
Well, it turns out that deleted columns are not actually removed from disk:
The DROP COLUMN form does not physically remove the column, but simply makes it invisible to SQL operations. Subsequent insert and update operations in the table will store a null value for the column.
Which, upon a moment's reflection, makes sense. If the column were completely removed, every tuple already written to disk for that table would need to be rewritten. For large tables this is not a viable option, so ignoring the deleted column in subsequent operations and storing nulls for newly created rows is the best alternative, at least from a performance standpoint.
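You can actually watch these ghost columns accumulate: a dropped column's entry stays behind in the pg_attribute catalog, flagged as dropped and renamed to an internal placeholder. Something like the following (with your own table name in place of some_table) lists them alongside the live columns:

```sql
-- Every attribute slot the table has ever used; dropped columns are still
-- present, just marked attisdropped and given placeholder names.
SELECT attnum, attname, attisdropped
  FROM pg_attribute
 WHERE attrelid = 'some_table'::regclass  -- substitute your table name
   AND attnum > 0
 ORDER BY attnum;
```

The relnatts counter in pg_class tells the same story in a single number and, as far as I can tell, it never goes back down for a given table.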
And so, we find out how I arrived at my aforementioned frustrating but interesting situation: it seems that I had run this particular test suite a few hundred times in the past couple of weeks (without recreating the database), and finally hit the 1600 column limit via the series of additions and deletions that the suite performs.
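For the curious, the whole scenario is easy to reproduce without running a single test. The sketch below (table and column names invented for the purpose) burns through attribute numbers much the way our migration tests did:

```sql
-- Start with a one-column table, then add and drop a throwaway column
-- 1,599 times. Each cycle permanently consumes one attribute number,
-- even though the visible schema never changes.
CREATE TABLE migration_target (id integer);

DO $$
BEGIN
    FOR i IN 1 .. 1599 LOOP
        EXECUTE 'ALTER TABLE migration_target ADD COLUMN throwaway integer';
        EXECUTE 'ALTER TABLE migration_target DROP COLUMN throwaway';
    END LOOP;
END;
$$;

-- The table still shows a single visible column, but all 1600 attribute
-- slots are now spoken for, so this fails with something like
-- "ERROR: tables can have at most 1600 columns":
ALTER TABLE migration_target ADD COLUMN one_more integer;
```

As far as I can tell, the only way to get those attribute slots back is to recreate the table (or dump and restore the database), which is exactly the kind of work we were avoiding between test runs.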
While this was odd, it wasn't the strangest error I've ever run into. I'll save that story for a later post, but I will leave you with this gem that Dan Kaminsky pointed out on Twitter in response to my discovery of the PostgreSQL table column limit.
Clearly the developers of PowerPoint decided to store the number of slides in a uint16, which seems very reasonable; they had to pick something, and a uint8 would "only" allow 255 slides.
My heart goes out to the poor soul(s) who discovered what the PowerPoint upper limit on slides was, sans source code.