How to add 1100 columns in SQL Server
Question
I need 1100 columns in a single table. I already have 928 columns in the same table, and now I need to add 150 new columns.
I know SQL Server allows a maximum of 1024 columns.
I cannot create a new table. If I create a new table and join it to my required table, I would need to make changes in many places, and that is not possible for me. So I want to add all the columns to the same (required) table. I know this table will be difficult to work with, but for now I need to meet this requirement.
asked 2016-12-04 by Mahfuz Morshed
Answer
In essence, the question you are asking is “How do I create a table with more than 1024 columns?” The answer to this is simply, “There is no easy way to exceed that hard limit in SQL Server. You should split the data into multiple tables.”
You could use sparse columns and wide tables. However, this comes with its own set of restrictions and limitations. For example, you cannot use compressed indexes with sparse columns. While using a wide table is possible, I would advise that you consider alternate methods.
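If you did go the sparse-column route, the table definition would look something like the sketch below. The table and column names here are hypothetical; declaring columns as SPARSE and adding a column set raises the column limit from 1,024 to 30,000, at the cost of the restrictions mentioned above.

```sql
-- Hypothetical sketch of a wide table using SPARSE columns and a column set.
-- Sparse columns store NULL values very cheaply, and a table with a column
-- set may have up to 30,000 columns instead of the usual 1,024.
CREATE TABLE dbo.WideData
(
    WideDataID    int IDENTITY(1,1) PRIMARY KEY,
    Attribute1    int           SPARSE NULL,
    Attribute2    varchar(100)  SPARSE NULL,
    Attribute3    datetime2     SPARSE NULL,
    -- ...hundreds more sparse columns...
    AllAttributes xml COLUMN_SET FOR ALL_SPARSE_COLUMNS
);
```

Note that sparse columns only pay off when the vast majority of values are NULL; densely populated sparse columns actually take more space than ordinary ones.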
Given that you must split the data into at least two tables, there are three aspects which I think you need to consider: Normalization, performance, and backwards compatibility.
Normalization
Having hundreds of columns on a single table is a sign that your data model is not well normalized. In your question, you’ve not specified what type of data is in this very wide table, but I’d anticipate that it would be best to break this data into smaller logical pieces, and break the table up into many tables. Without seeing the specific table definition, I can’t make any concrete suggestions, but based on experience this is surely the right move. In my experience, having a table with 1000 columns is a sign of a very bad database design.
For example, a Person table should probably not include address information within the same table. Instead, this should be split into separate Person and Address tables. Similarly, the Address table should contain only one row per address. The Person table would join to Address in a one-to-many relationship.
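A minimal sketch of that normalized design might look like this (the column names are hypothetical, chosen only to illustrate the one-to-many relationship):

```sql
-- Hypothetical sketch: one row per person, one row per address,
-- with Address referencing Person in a one-to-many relationship.
CREATE TABLE dbo.Person
(
    PersonID  int IDENTITY(1,1) PRIMARY KEY,
    FirstName nvarchar(100) NOT NULL,
    LastName  nvarchar(100) NOT NULL
);

CREATE TABLE dbo.Address
(
    AddressID   int IDENTITY(1,1) PRIMARY KEY,
    PersonID    int NOT NULL
        REFERENCES dbo.Person (PersonID),  -- each address belongs to one person
    StreetLine1 nvarchar(200) NOT NULL,
    City        nvarchar(100) NOT NULL,
    PostalCode  nvarchar(20)  NOT NULL
);
```

Each logical group of columns in your 1100-column table becomes a candidate for its own table in the same way.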
Performance
Having very wide tables will increase IO, exacerbating any disk IO bottlenecks. By having more data on a row, SQL Server stores fewer rows on each 8k page (in your scenario, likely only 1 row per page). Lower row-to-page density means that IO for even a simple task can be many times higher than it would be in a normalized scenario.
Additionally, if a row is wider than 8,060 bytes, SQL Server will use row-overflow data pages. This means that reading a single row can require multiple page reads. Additional details on how SQL Server stores row-overflow data can be read here.
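You can check whether a table is already paying this row-overflow penalty with sys.dm_db_index_physical_stats in DETAILED mode; a sketch is below (the table name dbo.WideData is a placeholder for your actual table):

```sql
-- Check whether a table is using ROW_OVERFLOW_DATA allocation units.
-- A non-zero page_count next to ROW_OVERFLOW_DATA means some rows
-- have already spilled off their 8k in-row data pages.
SELECT alloc_unit_type_desc,
       page_count,
       avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats
         (DB_ID(), OBJECT_ID(N'dbo.WideData'), NULL, NULL, 'DETAILED')
ORDER BY alloc_unit_type_desc;
```

The avg_record_size_in_bytes column also shows how close your rows are to the 8,060-byte limit.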
Backwards Compatibility
For backwards compatibility, you can create a very wide view that joins the underlying tables. You can leverage INSTEAD OF triggers on the view, which allow DML to be performed against the view while actually updating the underlying base tables. This is known as vertically partitioning a table.
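The pattern looks roughly like the sketch below. All table, view, and column names here are hypothetical; the point is that existing code keeps targeting the old table name while the data actually lives in two narrower tables:

```sql
-- Hypothetical sketch of vertical partitioning with a compatibility view.
-- The original wide table is split into two base tables sharing a key.
CREATE VIEW dbo.RequireTable
AS
SELECT a.RowID, a.Col1, b.Col929
FROM dbo.RequireTablePart1 AS a
JOIN dbo.RequireTablePart2 AS b
    ON b.RowID = a.RowID;
GO

-- An INSTEAD OF trigger routes inserts against the view
-- to the appropriate underlying base tables.
CREATE TRIGGER dbo.RequireTable_Insert
ON dbo.RequireTable
INSTEAD OF INSERT
AS
BEGIN
    INSERT INTO dbo.RequireTablePart1 (RowID, Col1)
    SELECT RowID, Col1 FROM inserted;

    INSERT INTO dbo.RequireTablePart2 (RowID, Col929)
    SELECT RowID, Col929 FROM inserted;
END;
```

Similar INSTEAD OF UPDATE and INSTEAD OF DELETE triggers would be needed to cover the remaining DML against the view.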
Short term, this method allows you to store the data in multiple physical tables, while still accessing it through a single consolidated view. You can continue to make progress in splitting up your very wide table, and still have backwards compatibility via the view.
Long term, it is likely best to update your code base to access the underlying tables directly, and slowly reduce and eventually eliminate use of the very wide view.
answered 2016-12-05 by Andy Mallon