The Curse and Blessings of Dynamic SQL


An SQL text by Erland Sommarskog, SQL Server MVP. Latest revision: 2022-08-30.

Copyright applies to this text. See here for the font conventions used in this article.


Introduction

This is a text about dynamic SQL. Used in the right place, dynamic SQL is a tremendous asset, but dynamic SQL is also one of the most abused features in SQL Server. I frequently see posts in SQL forums that use dynamic SQL or ask for it, when there is no need to, or where the need for dynamic SQL is due to an earlier mistake in the design. Very often these posts come from inexperienced SQL users. It is important to understand that dynamic SQL is an advanced feature, and ideally you should not start using dynamic SQL until you have mastered writing static queries.

How much you should use dynamic SQL depends on your role. If you are an application developer, you should be restrictive with your use of dynamic SQL. Dynamic SQL certainly has its place in application code, but, as we will see in this text, it also introduces complexity, so there is every reason to hold back. On the other hand, if you are a DBA, dynamic SQL is your best friend, because there are many DBA operations that can be automated with the help of dynamic SQL. (And while you use dynamic SQL all day long yourself, you should still try to prevent the devs from sticking dynamic SQL into the application code!)

Some readers may ask: what is dynamic SQL? Dynamic SQL is when you write SQL code into a string variable and then execute the contents of that variable. This can be done in client code or in a stored procedure. With a strict definition, static SQL can only occur in stored procedures and SQL scripts, since SQL in client code is always embedded in string literals in the client-side language. However, you could argue that a client that only has fixed SQL strings is using static SQL, and one of the aims of this article is to demonstrate how client code should be written to achieve this.

The first chapter after this introduction presents the basic commands to run dynamic SQL in a proper way and discusses traps you can encounter. This chapter also demonstrates how you should submit SQL queries from client code in a proper way. The next chapter is the most important one: it discusses SQL injection, a threat you must always protect yourself against when you work with dynamic SQL, no matter whether you are using dynamic SQL in stored procedures or writing client code. This is followed by a short chapter on the performance aspects of dynamic SQL, which highlights a few rules you should follow. This chapter, too, is largely applicable both to stored procedures and client code.

From this point on, however, the article leaves the client code aside and discusses only dynamic SQL in stored procedures and SQL scripts. The longest chapter in the article focuses on one of the big problems with dynamic SQL: if you do not have the discipline, code that generates dynamic SQL can easily become unreadable and difficult to maintain. This chapter is a style guide with recommendations to help you write code that generates dynamic SQL in a better way. The penultimate chapter of this article discusses some good use cases for dynamic SQL. This chapter includes a section on how to run a dynamic pivot – something I see questions about almost daily in SQL forums. The last chapter covers situations where I often see dynamic SQL being abused, and which frequently are due to a bad design choice earlier in the process.

This article is intended for a broad range of SQL workers. Particularly the first chapters are written with the inexperienced SQL developer in mind, and seasoned DBAs and lead programmers may find the material overly basic. However, you may still be interested in reading this material to get ammunition to keep your devs in check so that they do not abuse dynamic SQL. Experienced programmers may discover things in the style-guide chapter that they were not aware of or had not thought of. Conversely, less experienced programmers may find the latter part of that chapter a little difficult to digest, and you may prefer to drop out and move on to the section on, say, dynamic pivot, if that is the problem you have at hand.


Applicable SQL Versions and Demo Database

This article applies to all versions of SQL Server from SQL 2005 and up. There are no major differences between the SQL versions with regard to dynamic SQL since that release. There are a few smaller enhancements that have been added along the way, and I will call them out as we arrive at them.

However, this does not mean that all examples in the article will run on SQL 2005 as-is, as I have taken the liberty of embracing newer syntax where this helps to make the code shorter or more concise. These are small things, and you can easily modify the code if you want to run the examples on an older version of SQL Server. Some of these newer features appear only in a place or two, and for those I point out the differences as they appear. There are also a few features that I use over and over again, which I list below and generally do not mention further.

  • CREATE OR ALTER. Throughout the article, I use CREATE OR ALTER PROCEDURE. This syntax was introduced in SQL 2016 SP1. On earlier versions you will have to use CREATE the first time and then change to ALTER if the same procedure is modified.
  • IIF. IIF(condition, X, Y) is short for CASE WHEN condition THEN X ELSE Y END. IIF was introduced in SQL 2012, so with older versions you will have to replace IIF with the corresponding CASE expression.
  • INSERT VALUES for multiple rows. This syntax was introduced in SQL 2008, so on SQL 2005 you will have to rewrite this as multiple INSERT VALUES statements.
  • DECLARE with initialisation. This was added in SQL 2008, and on SQL 2005 you will have to split this into DECLARE + SET. (The sketch right after this list illustrates the last three rewrites.)
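To make the last three items concrete, here is a minimal sketch of my own (using a throwaway temp table) of how the newer syntax maps to what you would write on older versions:

-- IIF (SQL 2012) versus the corresponding CASE expression.
SELECT IIF(1 = 1, 'yes', 'no')
SELECT CASE WHEN 1 = 1 THEN 'yes' ELSE 'no' END

-- DECLARE with initialisation (SQL 2008) versus DECLARE + SET.
DECLARE @when date = '19980201'

DECLARE @when2 date
SET @when2 = '19980201'

-- Multi-row INSERT VALUES (SQL 2008) versus repeated single-row inserts.
CREATE TABLE #nums (n int NOT NULL)
INSERT #nums (n) VALUES (1), (2), (3)

INSERT #nums (n) VALUES (1)
INSERT #nums (n) VALUES (2)
INSERT #nums (n) VALUES (3)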

The demo database for this article is NorthDynamic, which you can create by running the script NorthDynamic.sql. The database is very small, less than 10 MB. The script runs on all versions of SQL Server from SQL 2005 and up. Not all examples build on this database; in those cases the article has CREATE TABLE statements where needed. I recommend that you work in tempdb for those examples.

NorthDynamic is a slightly modified version of Microsoft's old demo database Northwind, based on the fictive company Northwind Traders which flourished between 1996 and 1998; this explains why all orders are from this time frame. (If you are familiar with Northwind and wonder what the differences are: I have replaced deprecated data types so that the database can be installed with any collation. I have also made some changes to schema and data for the benefit of some examples that I wanted to include. I gave the database a new name in case I want another variation of Northwind for another article.)

How to Run Dynamic SQL

In this chapter, we will look at how to run dynamic SQL in a civilised way. I like to point out that there is nothing in the examples in this chapter that requires dynamic SQL, and from that point of view they are bad examples, because you should not use dynamic SQL when there is no reason to. However, I opted to do it this way, because I wanted to focus on the essentials and keep the examples simple. Actual use cases will come in the later chapters.

sp_executesql

sp_executesql is the system procedure that you invoke to run dynamic SQL. One way to describe this procedure is that it creates a nameless stored procedure which is saved to the plan cache (but not to disk) and then runs that procedure directly. On subsequent calls with the same SQL text, the cached plan is reused. At this point, however, we will not bother too much about the ramifications for the plan cache, but save that for the performance chapter. For the moment, just think of it as "creates a procedure and runs it directly".

And this is a very important point: a batch of dynamic SQL is just like a stored procedure. About everything you can do in a stored procedure, you can do in a batch of dynamic SQL. (There is one important difference with regard to permissions that I will cover in a later section, and a few more very marginal items that I will cover in this text.) And maybe most important of all: you can use parameters. This is something very essential that you should make use of. The converse also applies: there is no syntax that is valid inside dynamic SQL that is not also valid in a stored procedure. All that is special is that the dynamic SQL is stored in a string and then executed.

sp_executesql has two fixed parameters: @stmt and @params. @stmt is the text of the SQL code to execute. The fact that the name is in singular is a little misleading: the SQL code does not have to be a single statement, but can be a batch of any length with any number of statements. As you may guess, @stmt is a mandatory parameter. @params holds the parameter list for the code in @stmt, and its format is exactly like the format of the parameter list of a stored procedure. @params is not mandatory, but just like most of your stored procedures take parameters, so do most of your batches of dynamic SQL. It is to the point that whenever you review code and see a call to sp_executesql without parameters, there is every reason to check that the programmer is not making the mistake of concatenating the parameter values into the string, which is a very bad thing to do. After @params follow the actual values to pass to the parameter list in @params, in the same manner as you pass parameters to a regular stored procedure.

Here is an example script:

DECLARE @sql    nvarchar(MAX),  
        @params nvarchar(4000)

SELECT @sql = N'SELECT @cnt = COUNT(*) 
                FROM   dbo.Orders
                WHERE  OrderDate >= @date
                  AND  OrderDate < dateadd(MONTH, 1, @date)

                SELECT CompanyName
                FROM   dbo.Customers
                WHERE  CustomerID = @custid'

SELECT @params = N'@date    date, 
                   @cnt     int OUTPUT, 
                   @custid  nchar(5) = N''ALFKI'''

DECLARE @cnt  int, 
        @when date = '19980201'

EXEC sp_executesql @sql, @params, @when, @cnt OUTPUT
SELECT @cnt AS [Orders in Feb 1998]

EXEC sp_executesql @sql, @params, 
                   @date = '19970701', @cnt = @cnt OUTPUT, @custid = 'VINET'
SELECT @cnt AS [Orders in July 1997]

The script starts by declaring two variables that I later pass to sp_executesql. @sql is the variable I pass to the @stmt parameter; I use the name @sql as this seems more natural to me. Observe that the data type of the variable is nvarchar(MAX). You should always use this data type for your batches of dynamic SQL, without any exception. You need to use nvarchar, and cannot use varchar, for the simple reason that sp_executesql only accepts nvarchar and barfs if you attempt to use varchar. MAX, on the other hand, is not a requirement for sp_executesql, but it serves to protect your sanity. That is, if you use a limited type like nvarchar(1000), there is the risk that the batch you compose exceeds the length you specify. This leads to silent truncation and errors that can drive you crazy as you are trying to understand what is going on.
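
Here is a minimal sketch of the truncation trap (the 20-character length is deliberately absurd to keep the illustration short):

DECLARE @sql_short nvarchar(20)   -- far too short, for demonstration only
SELECT @sql_short = N'SELECT CompanyName FROM dbo.Customers'
PRINT @sql_short                  -- prints the truncated text: SELECT CompanyName F
-- Running the truncated batch would now fail with a syntax error.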

For @params, I use nvarchar(4000). As with @stmt, the data type must be nvarchar; varchar is not accepted. As for the length of 4000, that is mainly out of habit. You would need very many parameters to fill up that limit, but again, there is little reason to take the risk of guessing at, say, 100, and then get errors because it was too low.

Next, I set up my SQL batch by assigning it to @sql, and since it is a static piece of code, there is little to remark on. More interesting is the assignment to @params where I set up the parameter list. There are three parameters. The first is a "plain" parameter, while the second is an OUTPUT parameter and the third has a default value. Default values are not commonly used with dynamic SQL, but I included one here to stress that the parameter list is exactly like the parameter list of a stored procedure. On the other hand, using OUTPUT parameters with dynamic SQL is quite common, since you often want to get scalar data back from your batch of dynamic SQL.

Before I invoke my dynamic SQL, I declare two local variables. Let's look at the call to sp_executesql again:

EXEC sp_executesql @sql, @params, @when, @cnt OUTPUT

The first parameter is @sql, that is, my SQL text, and the second is @params, that is, my parameter list. Next comes @when, my local variable that I initiated to 1998-02-01. This value is passed to the @date parameter in the SQL batch. Last comes @cnt, which is to receive the value of the parameter @cnt in the dynamic SQL. Although they have the same name, they are really different entities; more about that later. Just as when I call a stored procedure, I need to specify OUTPUT also for the actual parameter. I do not pass a value for the parameter @custid, so the default for this parameter applies, and the second query returns the name Alfreds Futterkiste, which is the full name of the customer with the ID ALFKI. (As for why the customer IDs in NorthDynamic are codes rather than numeric IDs, I don't know what went on at Northwind Traders in the nineties, but it is kind of cute.)

Here is the second call to sp_executesql for the same SQL batch and parameters:

EXEC sp_executesql @sql, @params, 
                   @date = '19970701', @cnt = @cnt OUTPUT, @custid = 'VINET'

As you can see, in this call I use named parameters rather than positional ones, and for the input parameters @date and @custid I use literals for the actual values. This is fully in alignment with calls to regular stored procedures. Note particularly the value passed to the @custid parameter. This is a varchar literal (there is no N before it), while @custid is nchar(5). That is, when we come to this part of the parameter list, we are free to mix (var)char and n(var)char. The restriction that we must pass nvarchar applies only to the fixed parameters @stmt and @params.

To drive home the point that this is just like defining a stored procedure and running it at the same time, here is the same code logically, but with a "real" stored procedure:

CREATE OR ALTER PROCEDURE my_sp @date    date,
                                @cnt     int OUTPUT, 
                                @custid  nchar(5) = N'ALFKI' AS

   SELECT @cnt = COUNT(*) 
   FROM   dbo.Orders
   WHERE  OrderDate >= @date
     AND  OrderDate < dateadd(MONTH, 1, @date)

   SELECT CompanyName
   FROM   dbo.Customers
   WHERE  CustomerID = @custid
go
DECLARE @cnt  int, 
        @when date = '19980201'

EXEC my_sp @when, @cnt OUTPUT
SELECT @cnt AS [Orders in Feb 1998]

EXEC my_sp @date = '19970701', @cnt = @cnt OUTPUT, @custid = 'VINET'
SELECT @cnt AS [Orders in July 1997]

Here is one more example to highlight a detail:

DECLARE @sql nvarchar(MAX) = 'SELECT CompanyName
                              FROM   dbo.Customers
                              WHERE  CustomerID = @custid'
EXEC sp_executesql @sql, N'@custid nchar(5)', 'VINET'

As you see, in this example I don't have an @params variable, but I define the parameter list directly in the call to sp_executesql. Note the N preceding this parameter value; it is required to make it an nvarchar literal. I tend to use this pattern when I have a short parameter list, but the longer the list is, the more likely it is that I will put the definition in a variable to make the code more readable. It is also possible to pass a literal value to the @stmt parameter, but since the SQL batch is almost always built from different parts, you will rarely do this in practice.

Traps and Pitfalls

You have now learnt how to use sp_executesql to run dynamic SQL. We will now look at some pitfalls that may surprise you, if your expectations are not in line with the actual functionality.

Scope of Variables

Consider this fairly nonsensical stored procedure:

CREATE OR ALTER PROCEDURE mishap_sp AS
   DECLARE @tbl TABLE (a int NOT NULL)
   DECLARE @a int = 99

   EXEC sp_executesql N'SELECT a FROM @tbl SELECT @a'
go
EXEC mishap_sp

When we run this, we get two error messages:

Msg 1087, Level 15, State 2, Line 1
Must declare the table variable "@tbl".

Msg 137, Level 15, State 2, Line 1
Must declare the scalar variable "@a".

If you have been paying attention, you already understand why we get these errors: sp_executesql invokes a nameless stored procedure. That is, the code

SELECT a FROM @tbl SELECT @a

is not part of the procedure mishap_sp but of an inner (and nameless) stored procedure. As a consequence, the SELECT statement cannot access the variables in mishap_sp. In T‑SQL, variables are only visible in the scope they are declared in, never in inner scopes.

Note: scope is a word commonly used in programming to denote where variables (and other objects) are visible. In T‑SQL, a scope can be a stored procedure, trigger, function or simply an SQL script. A batch of dynamic SQL is also a scope.

If you want to pass variables from the surrounding procedure to the dynamic SQL, you must always pass them as parameters. For table variables, this requires that you have defined a table type, as in this example:

CREATE TYPE dbo.mytbltype AS TABLE (a int NOT NULL)
go
CREATE OR ALTER PROCEDURE hap_sp AS
   DECLARE @mytbl dbo.mytbltype,
           @a int = 99

   EXEC sp_executesql N'SELECT a FROM @tbl SELECT @a',
                      N'@tbl dbo.mytbltype READONLY, @a int', @mytbl, @a
go
EXEC hap_sp

Note: In case you are still on SQL 2005: table-valued parameters were introduced in SQL 2008.

What Can Be Parameters?

Some people might try:

EXEC sp_executesql N'SELECT * FROM @tblname', N'@tblname sysname', 'Employees'

That is, they expect to be able to send in a table name as a parameter. (As I will discuss in the section Dynamic Table Names in Application Code in the last chapter, the urge to do so is often due to an earlier design mistake.) But the error message is:

Msg 1087, Level 16, State 1, Line 1
Must declare the table variable "@tblname".

There is nothing magic about dynamic SQL. You cannot put the name of a table in a variable and use it in a FROM clause. Nor can you put the name of a column in a variable and have it interpreted as such. And the same applies to almost all object names, with one exception: the name of a procedure you use in an EXEC statement. You will see examples of this as the article moves on.
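
Here is a minimal sketch of that single exception, reusing the my_sp procedure we created above:

DECLARE @spname sysname = N'dbo.my_sp',
        @cnt    int

-- The name of a procedure can come from a variable in EXEC ...
EXEC @spname @date = '19970701', @cnt = @cnt OUTPUT
SELECT @cnt AS [Orders in July 1997]

-- ... but this does not extend to tables. This would fail, just like above:
-- DECLARE @tblname sysname = N'dbo.Orders'
-- SELECT * FROM @tblname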

Temp Tables and Dynamic SQL

In contrast to the table-variable example earlier, this works:

CREATE OR ALTER PROCEDURE hap_sp AS
   CREATE TABLE #temp(b int NOT NULL)
   EXEC sp_executesql N'SELECT b FROM #temp'
go
EXEC hap_sp

This is because a temp table is visible in all inner scopes invoked by the module that created it.

On the other hand, this fails:

CREATE OR ALTER PROCEDURE mishap_sp AS
   EXEC sp_executesql N'SELECT * INTO #temp FROM dbo.Customers'
   SELECT * FROM #temp
go
EXEC mishap_sp

The error message is:

Msg 208, Level 16, State 0, Procedure mishap_sp, Line 3
Invalid object name '#temp'.

A local temp table is automatically dropped when the scope where it was created exits. That is, in this example #temp is dropped when the batch of dynamic SQL exits, and therefore you cannot access it in the surrounding stored procedure. Instead, you need to create the table with CREATE TABLE before you invoke your SQL batch. ("But I don't know the schema, because it is dynamic!" some readers may object. If that is the case, you have probably made a design mistake. We will return to that in the last chapter of this article.)
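
Here is a minimal sketch of that pattern (the procedure name is mine, not one used elsewhere in the article): the outer procedure creates #temp, the dynamic SQL fills it, and the outer procedure can then read it:

CREATE OR ALTER PROCEDURE hap2_sp AS
   CREATE TABLE #temp (CustomerID  nchar(5)     NOT NULL,
                       CompanyName nvarchar(40) NOT NULL)
   EXEC sp_executesql N'INSERT #temp (CustomerID, CompanyName)
                        SELECT CustomerID, CompanyName FROM dbo.Customers'
   SELECT CustomerID, CompanyName FROM #temp
go
EXEC hap2_sp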

Dynamic SQL and Functions

You cannot use dynamic SQL in user-defined functions, full stop. The reason is simple: a function in SQL Server is not permitted to change database state, and obviously SQL Server cannot check beforehand what your dynamic SQL is up to. It also follows from the rule that you cannot call stored procedures in user-defined functions, and as we have learnt, a batch of dynamic SQL is a nameless stored procedure.

Permissions

This is one thing you need to be aware of, because here is something that is different from regular stored procedures. Consider this script, where we create a test user that we impersonate to test permissions:

CREATE OR ALTER PROCEDURE mishap_sp AS
   SELECT COUNT(*) AS custcnt FROM dbo.Customers
   EXEC sp_executesql N'SELECT COUNT(*) AS empcnt FROM dbo.Employees'
go
CREATE USER testuser WITHOUT LOGIN
GRANT EXECUTE ON mishap_sp TO testuser
go
EXECUTE AS USER = 'testuser'
go
EXEC mishap_sp
go
REVERT

The output (with output set to text in SSMS):

custcnt
-----------
91

(1 row affected)

Msg 229, Level 14, State 5, Line 1
The SELECT permission was denied on the object 'Employees', database 'NorthDynamic', schema 'dbo'.

When testuser runs mishap_sp, the SELECT against Customers succeeds thanks to something known as ownership chaining. Because the procedure and the table have the same owner, SQL Server does not check whether testuser has any permission on the Customers table. However, the SELECT that is in the batch of dynamic SQL fails. This is because the batch of dynamic SQL is not considered to have an owner. Or, alternatively, the owner is considered to be the current user, which is testuser. Either way, ownership chaining does not apply, and therefore SQL Server checks whether testuser has SELECT permission on Employees, and since we never granted any permission to testuser, this fails.

Thus, if you consider using dynamic SQL, you need to talk with your DBA or whoever is responsible for the security in the database to verify that this is acceptable. Maybe the policy is that users should only have permission to run stored procedures, but should not be granted any direct access to the tables. It is still possible to use dynamic SQL with such a policy in force, because you can bundle the permission with the stored procedure, if you sign it with a certificate and then grant a user created from that certificate the permissions needed. This is a technique that I describe in a lot more detail in my article Packaging Permissions in Stored Procedures. While this is a perfectly possible and valid solution, it does increase the complexity of your system a little bit, and thus raises the hurdle for using dynamic SQL.

You Are Hiding Your References

If you want to know where a certain table is referenced, SQL Server offers a couple of ways to find out. There are the catalog views sys.sql_dependencies and sys.sql_expression_dependencies, as well as the older system procedure sp_depends. You can also use View Dependencies from the context menu of a table in Object Explorer in SSMS; under the hood SSMS uses sys.sql_expression_dependencies. However, none of these will show you any references made in dynamic SQL, since when SQL Server parses a stored procedure and saves the dependencies, the dynamic SQL is just a string literal which SQL Server has no reason to care about at that point.

Note: In my SQL Short Story, Where Is that Table Used?, I present a solution which reads all SQL code in a database into a full-text indexed table, which permits you to search the code using full-text operators such as CONTAINS, and thereby you can find references also in dynamic SQL.

Chintak Chhapia pointed out a problem of a similar nature: when you are looking into upgrading your SQL Server instance to a newer version, you may want to use the Upgrade Advisor to find out if there are any compatibility issues in your code or whether you are using any deprecated features. However, the Upgrade Advisor is completely blind to what you do in dynamic SQL.

The RETURN Statement

Besides the difference with permissions, there are a few more small differences between dynamic SQL and stored procedures. None of them are very important, but here is one that Jonathan Van Houtte ran into. He wanted to use a RETURN statement with a specific return value in his dynamic SQL, but this is only permitted in regular stored procedures. That is, this fails:

EXEC sp_executesql N'IF @x = 0 RETURN 122', N'@x int', 0

The error message is:

Msg 178, Level 15, State 1, Line 1
A RETURN statement with a return value cannot be used in this context.

You can still use RETURN without a return value in dynamic SQL. This is legal:

EXEC sp_executesql N'IF @x = 0 RETURN', N'@x int', 0

EXEC()

EXEC() is an alternative way to run dynamic SQL. It exists for historic reasons; for more details, see the note at the end of this section. While there is never any compelling reason to use this form, there are plenty of examples of it out in the wild, so it would be wrong to be silent about it.

EXEC() does not support parameters, so all values have to be inlined. Here is an example, similar to the one we looked at previously:

DECLARE @sql     varchar(MAX),
        @when    date = '1998-02-01',
        @custid  nchar(5) = N'ALFKI',
        @empid   int = 3

SELECT @sql = 'SELECT COUNT(*) 
    FROM   dbo.Orders
    WHERE  OrderDate >= ''' + convert(char(8), @when, 112) + ''' 
      AND  OrderDate < ''' + convert(char(8), dateadd(MONTH, 1, @when), 112) + '''
      AND  EmployeeID = ' + convert(varchar(10), @empid) + '

    SELECT CompanyName
    FROM   dbo.Customers
    WHERE  CustomerID = N''' + @custid + ''''

PRINT @sql
EXEC(@sql)

If you look at this code and compare it with the original example with sp_executesql, what do you think? Don't you get the impression that this looks a lot more cumbersome? The logic of the queries is harder to follow when the SQL is broken up by all these single quotes, plusses and calls to convert. The reason for this cascade of single quotes is that to include a single quote in a string literal, you need to double it, and typically the single quotes appear at the beginning or the end of a string fragment, so the doubled single quote inside the string is followed by the single quote that terminates the string. (If that sentence got your head spinning, you more or less got my point. 🙂)

When it comes to the convert, date and time values constitute a special challenge, since you need to pick a good format code, so that the constructed date string is not misinterpreted. 112 is a good code for dates only, 126 for values with both date and time.
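
As a quick illustration of the two format codes (run on any date; only the shape of the output matters here):

SELECT convert(char(8),  getdate(), 112) AS style_112,   -- yyyymmdd
       convert(char(23), getdate(), 126) AS style_126    -- yyyy-mm-ddThh:mi:ss.mmm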

You may note here that @sql is declared as varchar(MAX); in contrast to sp_executesql, EXEC() accepts both varchar and nvarchar.

As you can see, I have added a diagnostic PRINT, so that you can see the actual SQL generated. This is the output:

SELECT COUNT(*) 
    FROM   dbo.Orders
    WHERE  OrderDate >= '19980201' 
      AND  OrderDate < '19980301'
      AND  EmployeeID = 3

    SELECT CompanyName
    FROM   dbo.Customers
    WHERE  CustomerID = N'ALFKI'

This is something I will return to in the style guide, but it is worth making the point already now: when you work with dynamic SQL and build a query string, you always need a diagnostic PRINT, so that you can see what you actually generated in case you get an error message or an unexpected result.

The input to EXEC() does not have to be a single variable; it can be a concatenation of simple terms, that is, variables and literals, as in this example:

DECLARE @custid  nchar(5) = N'ALFKI'

EXEC('SELECT CompanyName
      FROM   dbo.Customers
      WHERE  CustomerID = N''' + @custid + '''')

Function calls are not permitted, so this fails with a syntax error:

DECLARE @when    date = '1998-02-01',
        @empid   int = 3

 EXEC(
   'SELECT COUNT(*) 
    FROM   dbo.Orders
    WHERE  OrderDate >= ''' + convert(char(8), @when, 112) + ''' 
      AND  OrderDate < ''' + convert(char(8), dateadd(MONTH, 1, @when), 112) + '''
      AND  EmployeeID = ' + convert(varchar(10), @empid))

In any case, there is little reason to use this pattern. If things go wrong, there is no way to splice in a diagnostic PRINT, so it is much better to store the SQL string in an @sql variable.

Overall, there is little reason to use EXEC() at all, when sp_executesql permits you to use parameters and thereby makes the code a lot more readable. And, as we will see later in this article, readability and complexity are only one reason to stay away from string concatenation. So I could make it short and say Don't use EXEC(). Ever. But that would not be honest, because I tend to use EXEC() myself when the SQL string takes no parameters. That is, rather than saying

EXEC sp_executesql @sql

I most often write the shorter:

EXEC(@sql)

You will see this more than once in this article. But you could certainly argue that this is just a bad habit of mine. If you decide to always use sp_executesql, no matter whether your dynamic SQL takes parameters or not, you would not be doing anything wrong.

Before I close this section, I should add that there is actually one thing you can do with EXEC() that you cannot do with sp_executesql: you can impersonate a user or a login when you run your SQL batch. So instead of the impersonation we did above when we tested permissions for testuser, we could have run:

EXEC ('EXEC mishap_sp') AS USER = 'testuser'

This has the advantage that you know for sure that you will revert from the impersonated context, even if there is an execution error. But personally, I don't think this outweighs the increased complexity that dynamic SQL incurs. In any case, this is to be considered an advanced feature.

Note: EXEC() may come to use on very old versions of SQL Server. sp_executesql was introduced in SQL 7, so if you find yourself on SQL 6.x, EXEC() is your sole option. You may also need to use EXEC() on SQL 7 and SQL 2000, if your dynamic SQL can exceed 4000 characters, since nvarchar(MAX) is not available on those versions. EXEC() permits you to work around this, since you can say EXEC(@sql1 + @sql2 + @sql3) and it accepts that the resulting string exceeds 8000 bytes.

EXEC() AT linked server

This is a special form of EXEC() which permits you to run a batch of commands on a linked server. This form of EXEC() does permit you to use parameters, if in a different way from sp_executesql.

To play with this, you can set up a linked server that points back to your current instance like this:

EXEC sp_addlinkedserver LOOPBACK, '', 'SQLNCLI', @datasrc = @@servername
EXEC sp_serveroption LOOPBACK, 'rpc out', 'true'

You can substitute @@servername with the name of another instance you have access to if you like. Just make sure that you have NorthDynamic on that instance as well.

Here is how you could run the batch of dynamic SQL above:

DECLARE @sql     varchar(MAX),
        @when    date = '19980201',
        @custid  nchar(5) = 'VINET',
        @cnt     int

SELECT @sql = 'SELECT ? = COUNT(*) 
               FROM   NorthDynamic.dbo.Orders
               WHERE  OrderDate >= ?
                 AND  OrderDate < dateadd(MONTH, 1, ?)

               SELECT CompanyName
               FROM   NorthDynamic.dbo.Customers
               WHERE  CustomerID = ?'
PRINT @sql
EXEC(@sql, @cnt OUTPUT, @when, @when, @custid) AT LOOPBACK
SELECT @cnt AS [Orders in Feb 1998]

In contrast to sp_executesql, the parameters do not have names; instead the question mark serves as a parameter placeholder. SQL Server passes the parameters given to EXEC() AT in the order the question marks appear. That is, @cnt goes to the first question mark, and as you see we can use the OUTPUT keyword here as well. Next, @when appears twice in the parameter list, simply because it is used twice in the SQL code.

There is a good reason why EXEC() AT does not use parameter names like sp_executesql does: the linked server may be something other than SQL Server, where names starting with @ would make little sense. The ? parameter marker, on the other hand, is a standard element in ODBC and OLE DB, and the OLE DB provider for the remote data source will change the ? into whatever makes sense on the other data source.

For simplicity, I used EXEC() AT with a linked server that is another SQL Server instance. However, in this case it is a lot easier to use sp_executesql with four-part notation as in this example:

DECLARE @sql    nvarchar(MAX),  
        @params nvarchar(4000)

SELECT @sql = N'SELECT @cnt = COUNT(*) 
                FROM   dbo.Orders
                WHERE  OrderDate >= @date
                  AND  OrderDate < dateadd(MONTH, 1, @date)

                SELECT CompanyName
                FROM   dbo.Customers
                WHERE  CustomerID = @custid'

SELECT @params = N'@date    date,
                   @cnt     int OUTPUT, 
                   @custid  nchar(5) = N''ALFKI'''

DECLARE @cnt  int, 
        @when date = '19980201'

EXEC LOOPBACK.NorthDynamic.sys.sp_executesql @sql, @params, @when, @cnt OUTPUT
SELECT @cnt AS [Orders in Feb 1998]

You may note that in the example with EXEC() AT, I included the database name in the SQL string, but I could leave it out in the version with sp_executesql. When you invoke sp_executesql (like any other system procedure) qualified with a database name, sp_executesql runs in the context of that database. Thus, there is every reason to use this pattern when you want to run a parameterised SQL statement on a remote SQL Server instance. But when your linked server is not running SQL Server, EXEC() AT is definitely your friend.
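
Here is a minimal sketch of that behaviour, run locally without any linked server (assuming you have NorthDynamic installed):

-- Three-part naming makes the batch run in the context of that database.
EXEC NorthDynamic.sys.sp_executesql
     N'SELECT db_name() AS running_in, COUNT(*) AS custcnt FROM dbo.Customers'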

Keep in mind that there are some restrictions on which data types you can use with linked servers. For instance, you cannot use the xml data type. You can pass LOB types (e.g. nvarchar(MAX)), but only as input parameters; you cannot receive them as OUTPUT parameters.

Running Dynamic SQL From Client Code

The difference between stored procedures and client code in this regard is that in stored procedures you only use dynamic SQL occasionally, but in client code you could say that you do it all the time, since the SQL code is always inside a string literal in the client-side language. However, that does not mean that you always have to use string concatenation to build the string. Au contraire, that is an exceptional case. As long as you are only working with plain-vanilla queries against specific tables, your queries should be contained in a single string literal that holds the complete query and where the input values from the user are passed as parameters. Or put differently: no matter whether you are writing dynamic SQL in a stored procedure or in client code, anything that can be a variable as permitted by the SQL syntax should be passed as a parameter and not be inlined into the query string by way of concatenation. In this section we will learn how to do this in .NET, and we will also start looking at the many problems with inlining parameter values.

Note: If you are working in a different environment than .NET, you should still read this section. The pattern I introduce here applies to all client environments for SQL Server, although the names of the objects and the methods are different. And while .NET supports named parameters in the @param style, some environments only support parameter markers, typically ?, as we saw examples of in the previous section.

I should also point out that this section is aimed at client code where you actually write SQL code. If you are using an ORM such as Entity Framework, you may be using a language like Linq that generates the SQL code. For reasons irrelevant to this article, I am not a friend of ORMs, but at least they keep users away from writing bad dynamic SQL, and I will not cover Linq and the like in this article.

For this section, we will not use the tables in NorthDynamic, but instead work with this table:

CREATE TABLE typetbl (ident   int          NOT NULL IDENTITY,
                      intcol  int          NOT NULL,
                      deccol  decimal(8,2) NOT NULL,
                      fltcol  float        NOT NULL,
                      strcol  nvarchar(25) NOT NULL,
                      dtcol   datetime     NOT NULL,
                      CONSTRAINT pk_datatyptbl PRIMARY KEY (ident)
)

While you could create it in NorthDynamic, I recommend that you create the table in tempdb, and that is what I will assume in the text that follows.

Say that we want to insert a row into this table, that we have the values in variables intval, decval etc., and that we want to get back the value of ident for the inserted row. Here is one way to do this:

public static void InsertTbl () {

   using(SqlConnection cn = new SqlConnection(strConn)) {
   using(SqlCommand cmd = new SqlCommand()) {
      cmd.CommandType = CommandType.Text;

      cmd.CommandText =
           @"INSERT dbo.typetbl(intcol, strcol, fltcol, deccol, dtcol)
                 VALUES(@intval, @strval, @fltval, @decval, @dtval)
             SELECT @ident = scope_identity()";

      cmd.Parameters.Add("@intval", SqlDbType.Int).Value = intval;
      cmd.Parameters.Add("@strval", SqlDbType.NVarChar, 25).Value = strval;
      cmd.Parameters.Add("@fltval", SqlDbType.Float).Value = fltval;
      cmd.Parameters.Add("@decval", SqlDbType.Decimal).Value = decval;
      cmd.Parameters["@decval"].Precision = 8;
      cmd.Parameters["@decval"].Scale = 2;
      cmd.Parameters.Add("@dtval", SqlDbType.DateTime).Value = dtval;
      cmd.Parameters.Add("@ident", SqlDbType.Int);
      cmd.Parameters["@ident"].Direction = ParameterDirection.Output;

      cmd.Connection = cn;
      cn.Open();

      cmd.ExecuteNonQuery();

      int identval = Convert.ToInt32(cmd.Parameters["@ident"].Value);
      Console.WriteLine("The inserted row has id " + identval.ToString());
   }}
}

We first create connection and command objects (inside using clauses to make sure that the resources are released as soon as they go out of scope) and set the command type to be text. Next, in adherence with what I said at the beginning of this section, we define the command text as a single static string which includes a few T‑SQL variables. Or, more precisely, they are parameters.

In the suite of statements that follows, we define these parameters with the Add method of the SqlParameterCollection class. There are a couple of overloads of this method, of which the two most commonly used appear in this example. The first parameter is the name of the parameter. (It appears that the @ can be left out, but I recommend that you always include it.) The second parameter is a value from the enumeration SqlDbType (which is in the System.Data namespace). This enumeration basically has one entry for each data type in SQL Server, with the name being the same as the SQL Server data type, but with initial uppercase and some middle uppercase as well to spice it up. There are a few deviations for some lesser-used types that I will not cover here. For a complete list, see the .NET Framework reference. For types that go with a length, that is, string and binary types, we use an overload that takes a third parameter which is the length. (For LOB types such as nvarchar(MAX), you specify -1 for the length.) The example includes a decimal value. There is no overload to define a decimal parameter with precision and scale, so we need to set these properties separately, as seen in the code sample above.

The Add method returns the newly created SqlParameter object, which permits us to set the Value property directly, and thus for all but @decval we can define a parameter in a single line of code. I did not include this in the example, but to explicitly pass a NULL value, set the Value property to System.DBNull.Value.

For @ident there is no value to set, since this is an output parameter, which we indicate by setting the Direction property. (This property defaults to ParameterDirection.Input.)

Once the parameters have been defined, we open the connection, run the SQL batch, and then we retrieve the generated IDENTITY value from the output parameter and print it out.

At this point you may want to try the code. The file clientcode.cs includes InsertTbl as well as a method BadInsert that we will look at later. There is also a Main that parses the command line and has a simple exception handler. At the top of the file you see this:

//  Connection string, change server and database as needed!
private static string
    strConn = @"Integrated Security=SSPI;" +
              @"Data Source=(local);Initial Catalog=tempdb;";

// Constants for the demo.
private static int      intval = 123456;
private static string   strval = "alpha (α) and beta (β)";
private static double   fltval = 17.0/3.0;
private static decimal  decval = 456.76M;
private static DateTime dtval  = DateTime.Now;

Save and compile the program. (See here if you need assistance with compilation.) Open a command prompt, move to the folder where you have the executable, and run it as:

clientcode

You should see output like this:

The inserted row has id 1

You can also run a SELECT against typetbl to see that the values were inserted.

Let's now investigate what happens in terms of T‑SQL. You can use Profiler to capture the command that is sent to SQL Server. (You can also use the XEvent Profiler in recent versions of SSMS.) You will see something like this (I have reformatted the output for legibility):

declare @p8 int
set @p8=2
exec sp_executesql N'INSERT dbo.typetbl(intcol, strcol, fltcol, deccol, dtcol)
                 VALUES(@intval, @strval, @fltval, @decval, @dtval)
             SELECT @ident = scope_identity()',
             N'@intval int, @strval nvarchar(25), @fltval float,
               @decval decimal(8,2), @dtval datetime, @ident int output',
       @intval=123456, @strval=N'alpha (α) and beta (β)', @fltval=5.666666666666667,
       @decval=456.76, @dtval='2018-09-22 00:07:35.193', @ident=@p8 output
select @p8

So that is the heart of the matter! The @ parameters are not just an abstraction in .NET; the batch is passed to sp_executesql exactly as it was entered, and all the parameters are part of the parameter list in @params.

The full story is a little different, though. .NET does not actually compose the code above; what you see is a textual representation of what largely is a binary stream of data. What appears in the trace is an RPC event (RPC = Remote Procedure Call). That is, .NET tells SQL Server to run a stored procedure, this time sp_executesql, with so and so many parameters, and then it sends the parameters in binary form. If you were to snoop on the wire with Wireshark or similar, you would see the INSERT statement, but not the surrounding quotes. If you were to look at the bytes preceding the INSERT keyword, you would find the length of the string. The same goes for the parameters @params and @strval. You would not be able to discern the values of @intval, @fltval, @decval and @dtval, unless you are familiar with the binary format.

Note: The variable @p8 in the trace output is a somewhat funny way that Profiler uses to indicate the return value of output parameters. In reality, @p8 does not exist at all. And notably, the value 2 is never passed to the @ident parameter.

The reader may find this to be a little too much nitty-gritty detail to really catch your attention, so let me summarise the main points so far:

  • When you define one or more parameters for your SQL batch, this results in a call to sp_executesql.
  • The parameter values are passed in a binary format without any SQL syntax around them. (Why this matters will be apparent later.)

And most of all:

  • When you pass values to an SQL batch from your client code, be that .NET or anything else, you should always use parameterised statements.

The last point should really be a superfluous one to make, because what you have seen is the norm that any professional software developer should follow. But unfortunately, I see far too many examples in user forums and elsewhere which indicate that this is not always the case. Therefore, we need to look at what far too many people do, and why you should not do that. In clientcode.cs there is also a method BadInsert which looks like this:

public static void BadInsert () {

   using(SqlConnection cn = new SqlConnection(strConn)) {
   using(SqlCommand cmd = new SqlCommand()) {
      cmd.CommandType = CommandType.Text;
      cmd.CommandText =
            @"INSERT typetbl (intcol, strcol, fltcol, deccol, dtcol)
              VALUES(" + intval.ToString() + @",
                    '" + strval + @"',
                     " + fltval.ToString() + @",
                     " + decval.ToString() + @",
                    '" + dtval.ToString() + @"')
              SELECT scope_identity()";

      Console.WriteLine(cmd.CommandText);

      cmd.Connection = cn;
      cn.Open();

      Object res = cmd.ExecuteScalar();

      int identval = Convert.ToInt32(res);
      Console.WriteLine("The inserted row has id " + identval.ToString());
   }}
}

At first glance it may seem shorter, and some readers may even find it simpler than InsertTbl, as there are no calls to extra methods. "We just build the query string; let's stick to the KISS principle and skip the fancy stuff." (KISS = Keep It Simple, Stupid.) But, as we will see, this is only stupid – it is not simple at all. The fact that I felt compelled to add a diagnostic write of the resulting command text may be a first indication of this.

If you are hot on it, you can try it directly by running this in the command-line window:

clientcode BAD

This invokes BadInsert. However, I should warn you that for a fully successful outcome, all of the points below must be true:

  1. In your regional settings in Windows (or locale on Linux), the decimal separator must be a period and not a comma.
  2. The date format in your regional settings must match the date-format setting in SQL Server.
  3. Your database must have a Greek collation.

When I ran the program on my computer, I got this output:

INSERT typetbl (intcol, strcol, fltcol, deccol, dtcol)
                 VALUES(123456,
                       'alpha (a) and beta (ß)',
                        5,66666666666667,
                        456,76,
                       '2018-09-22 23:25:57')
                 SELECT scope_identity()

EXCEPTION THROWN: There are fewer columns in the INSERT statement than values specified in the VALUES clause. The number of values in the VALUES clause must match the number of columns specified in the INSERT statement.

As you see, an exception was thrown and no data was inserted. Furthermore, the error message seems mysterious at first. When we go back to the .NET code and count columns and values, we can find no mismatch. But if you look more closely at the generated command, you will find commas in the values for deccol and fltcol, which messes up the T‑SQL syntax. These commas appeared because ToString respects the regional settings, and I have my regional settings in Windows set to Swedish, where the decimal delimiter is a comma and not a period. There is obviously a simple fix: use something more sophisticated than ToString that permits you to control the format. However, imagine a naïve programmer living in, say, Australia, where period is the decimal delimiter, who writes the above. The code works, passes all tests and is shipped. Then one day it reaches a part of the world, for instance continental Europe, where comma is the decimal delimiter. And European users find that the software crashes as soon as they start it. You can imagine how that will affect the reputation of the software vendor.

To be able to run the program, I changed my regional settings to English (Australia). Now I got a different error:

INSERT typetbl (intcol, strcol, fltcol, deccol, dtcol)
                 VALUES(123456,
                       'alpha (a) and beta (ß)',
                        5.66666666666667,
                        456.76,
                       '22/09/2018 11:33:51 PM')
                 SELECT scope_identity()

EXCEPTION THROWN: The conversion of a varchar data type to a datetime data type resulted in an out-of-range value.
The statement has been terminated.

This time, the decimal and float values are good, but instead the value for dtcol causes a problem. My language setting in SQL Server is us_english and the date-format setting is mdy. That is, SQL Server expects the value to be month/day/year. Since I ran this on the 22nd of the month, I got the error. For the first twelve days of the month, the value would have been inserted – but for eleven of those days it would have been inserted incorrectly! Again, this is something that can be addressed by formatting: the date should be in a format that is safe in SQL Server, for instance YYYY-MM-DDThh:mm:ss.fff. (The T in this format stands for itself.) But a naïve developer in the US will never notice the problem, because he has the same setting in Windows and SQL Server. (And it is not any better with a naïve Swedish developer for that matter. The Swedish format YYYY-MM-DD is correctly interpreted with the mdy setting.) In this case, the software may even work at sites in countries like Germany or Australia where dates are written as day/month/year, if they have the date format set to dmy in SQL Server. But eventually there will be a mismatch, and there will be errors. And, even worse, incorrect dates.
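
Here is a minimal T‑SQL sketch of why the T-separated format is the safe choice for datetime (my own illustration, not part of the clientcode program):

SET DATEFORMAT dmy   -- e.g. what you get with a German or Swedish language setting

-- The ISO 8601 format with T is unambiguous and ignores the DATEFORMAT setting:
SELECT convert(datetime, '2018-09-22T23:25:57') AS always_22_september
-- whereas convert(datetime, '2018-09-22') would be read as year-day-month here and fail.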

Note: for a more detailed discussion of date formats, see the article The ultimate guide to the datetime datatypes by SQL Server MVP Tibor Karaszi.

I made one more attempt to run the program. This time I set my regional settings to English (United States). I got this output:

INSERT typetbl (intcol, strcol, fltcol, deccol, dtcol)
                 VALUES(123456,
                       'alpha (a) and beta (ß)',
                        5.66666666666667,
                        456.76,
                       '9/22/2018 11:50:47 PM')
                 SELECT scope_identity()

The inserted row has id 4

Success! The row was inserted. However, when I looked at the data, I noticed a snag. In strcol I saw this:

alpha (a) and beta (ß)

If you look closely, you can see that the Greek alpha has been replaced by a regular lowercase Latin a, and instead of a lowercase beta, there is a German "scharfes s". This is due to the fact that there is no N before the string literal in the SQL statement. Thus, this is a varchar literal, and for varchar, the character repertoire is defined by the code page of the collation. My collation for the test was a Finnish_Swedish collation, which uses code page 1252 for varchar, and this code page does not include alpha and beta. Therefore, SQL Server replaced them with fallback characters it deemed suitable.
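
You can see the same effect directly in T‑SQL; the sketch below assumes a database collation that uses code page 1252, as in my test:

SELECT 'alpha (α) and beta (β)'  AS varchar_literal,   -- comes back as 'alpha (a) and beta (ß)'
       N'alpha (α) and beta (β)' AS nvarchar_literal   -- Unicode literal, characters preserved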

Note: the observant reader may notice that the same replacement occurred also in the output in the command-line window when running the program. This is not because of the missing N, but because the command-line environment has no Unicode support at all.

There are more problems with BadInsert. The clientcode program permits you to override the value of strval on the command line. Run this:

clientcode BAD Brian O'Brien

I had changed my regional settings back to Swedish when I ran this, but it does not really matter.

INSERT typetbl (intcol, strcol, fltcol, deccol, dtcol)
                 VALUES(123456,
                       'Brian O'Brien',
                        5,66666666666667,
                        456,76,
                       '2018-09-23 00:07:36')
                 SELECT scope_identity()

EXCEPTION THROWN: Incorrect syntax near 'Brien'.
Unclosed quotation mark after the character string ')
SELECT scope_identity()'.

Because of the single quote in Brian's name, this time I got a syntax error. Now look at this:

clientcode GOOD Brian O'Brien

GOOD invokes InsertTbl, and the row is inserted without any pain.

Yes, we could fix BadInsert so that it doubles the quotes in strval. But is that going to make the SQL code easier to read?

And that brings us to the gist of this exercise. InsertTbl shows a very structured way of writing data-access code. You define the SQL statement in a single string, which is free from any disruption of .NET syntax, save for the surrounding double quotes. Then it is followed by a definition of the parameters in a simple but structured way. In BadInsert, the SQL statement is mixed with .NET syntax, which means that the more variables there are, the harder it is to see the SQL forest for all the .NET trees.

With BadInsert you can get a conflict between the regional settings in the client and the interpretation in SQL Server, which has its own settings and rules, different from .NET and Windows. On the other hand, with InsertTbl, all values are interpreted consistently in the client program according to the regional settings on the user's computer, and they are transformed to an unambiguous binary format that SQL Server always interprets in the same way.

When it comes to the final mistake, leaving out the N, this is an error you could commit with InsertTbl as well, as you might mistakenly use SqlDbType.VarChar instead of SqlDbType.NVarChar, which would lead to the same mutilation of the Greek characters. Still, the .NET interface more or less encourages you to be exact with the data types.

The observant reader may have noted that there is one more difference between the two methods. In InsertTbl, I use an output parameter to get the id value of the inserted row, whereas in BadInsert I return the id with a SELECT statement and use ExecuteScalar to run the batch and get the value that way. Both of these approaches are perfectly valid, and I wanted to show examples of both. Which one you use is a matter of taste and what fits in best at the moment. (Obviously, if you have more than one return value, ExecuteScalar is not an option.)

Some readers may feel that having to specify the exact data type and details like the length of strings and precision/scale for decimal is a bit of a burden. Surely .NET, which is aimed at increasing developer productivity, could do better? Indeed. If you do not specify the length for a string or binary parameter, .NET will use the length of the value you supply, and something similar is true for decimal values. Furthermore, the SqlParameterCollection class provides the method AddWithValue which infers the type from the value you pass in. NEVER USE ANY OF THIS, PERIOD! You see, while this may be a boost to developer productivity, it is a setback for the productivity of the DBA who is likely to be the one who has to figure out why the database access is slow. Why this is so is something I will return to in the performance chapter. But if you want to know more details here and now, you can read the blog post AddWithValue is Evil from SQL Server MVP Dan Guzman.

You may argue that mirroring the length of strings in the .NET application causes a maintenance problem and that you don't want to change the code if the column is changed from nvarchar(25) to nvarchar(50). If that is a concern, you can use 100 as a standard value for all "short" columns, 4000 for all longer nvarchar and 8000 for long varchar. (For MAX parameters, you should use -1 as noted above.) There are still situations where this can affect performance, but that is something which is a lot more subtle, and it is nothing I will discuss in this article. And the impact is nowhere near as far-reaching as when you do not specify any length at all.

In this section, I only looked at .NET, but whichever environment you are using, building an SQL string by concatenating user input is completely unacceptable if you are writing code in a professional context as an employee or a consultant. You should always use parameters where the SQL syntax permits. This may seem like a very harsh thing to say, but in this section we only looked at it from the viewpoint of developer convenience. Writing and maintaining code with fixed parameterised SQL strings is so much easier than inlining parameter values. We will look at an even more important reason in the next chapter, from which it will be clear why I use such strong words. Note that we will keep working with typetbl and the clientcode program, so don't drop them now.

Beware of SQL Injection

When you work with dynamic SQL you need to be aware of SQL injection. Well, "aware" is a bit of an understatement. Every time you build code dynamically from some input, you should ask yourself these questions:

  1. Can the input be crafted so that the generated SQL code performs something else than intended?
  2. Can this be utilised by a malicious user?
  3. Can I protect myself against it?

In this chapter, we will look at how the first two things can happen and what your means of protection are. The first section discusses SQL injection in client code, but this does not mean that you can skip it if you only do T‑SQL programming, because the lessons in this chapter are of a general nature and apply to everyone. The remaining sections in this chapter cover issues specific to T‑SQL.

SQL Injection in Client Code

What is SQL injection? SQL injection is when a user enters SQL syntax in an input field or similar and causes the application to perform something else than intended. Here is a classic example, Exploits of a Mom, from the comic strip xkcd.


For readers who have never heard of the concept of SQL injection, this may not register immediately, but let's try it with our clientcode program. Let's first create a Students table:

CREATE TABLE Students (a int NOT NULL)

(Since we are only going to try to drop it, the columns do not really matter.)

To see whether xkcd could be pulling our legs, we try the name as given in the strip:

clientcode BAD Robert'); DROP TABLE Students; --

No, this did not work out:

INSERT typetbl (intcol, strcol, fltcol, deccol, dtcol)
                 VALUES(123456,
                       'Robert'); DROP TABLE Students; --',
                        5,66666666666667,
                        456,76,
                       '2018-09-23 21:47:00')
                 SELECT scope_identity()

EXCEPTION THROWN: Incorrect syntax near '5'

But if you look more closely at this, you may see that xkcd is on to something. That single quote after Robert seems to fit into the syntax. The single quote that originally was in BadInsert is still there – but it is hidden behind the two hyphens that start a comment. As it happens, I introduced newlines in the generated statement to make it easier to read, and therefore the number on the next line causes a syntax error. Many developers would not bother about adding newlines, and with the statement on a single line, the generated command would have been syntactically correct as such. It would still have failed due to the mismatch of the number of columns between the INSERT and VALUES lists.

None of these are obstacles that cannot be overcome. Next try this:

clientcode BAD Robert', 1, 1, 1); DROP TABLE Students; SELECT 1 WHERE 1 IN ('1

It may look like a big piece of nonsense, but look at the output:

INSERT typetbl (intcol, strcol, fltcol, deccol, dtcol)
                 VALUES(123456,
                       'Robert', 1, 1, 1); DROP TABLE Students; SELECT 1 WHERE 1 IN ('1',
                        5,66666666666667,
                        456,76,
                       '2018-09-23 21:51:48')
                 SELECT scope_identity()

The inserted row has id 1

The batch completed without errors, and if you run SELECT * FROM Students in a query window, you find that the table is no longer there.

To understand how BadInsert was exploited, let's look at the methods used. To get the original INSERT statement to work, we added the digit 1 as many times as needed to make the VALUES list match the INSERT list. And we were happy to see that 1 was accepted for all values (there could have been conversion errors or constraint violations). To get the rest of the syntax to work, we had to incorporate that list of values in a new statement, and a SELECT with IN seemed like the best pick. We don't bother with a table in the SELECT, because there is nothing that says that a column or a variable must come before IN – it can just as well be a literal. Finally, we had to take care of the quote that is supposed to close the value in strval and the comma that follows. As you can see, this was achieved by the final '1.

Just as a contrast: recreate the Students table and run the same again, but replacing BAD with GOOD. This also completes successfully, but when you check the Students table, it is still there. If you look in typetbl, you find the first 25 characters of the injection string, and in fltcol, deccol and dtcol you find the constant values from the clientcode.cs file, that is, the intended values. In the row inserted by the injection attack, on the other hand, we see something else (the result of all those 1s). You may also note that the ident column for the row added with BadInsert has a different value than was reported in the output above. This is because ExecuteScalar picked up that 1 from the extra SELECT we injected.

It was easy for us to compose the injection string, because we had access to the source code. You may argue that it would be a lot more difficult for an intruder who does not see the code to find out how the injection attack should be crafted, but that is not really the case. Imagine that BadInsert is actually inside a web application which is open to everyone on the Internet and strval gets its value from a URL. Or, for that matter, a cookie. The evil user could first attempt a simple exploit of the kind in the xkcd strip. If the user is lucky, the web server displays the error message from SQL Server, in which case it is a relatively simple task to figure it out. That is bad practice in itself; best practice is to configure the web server to show a generic error message and only write the actual error message to a local log file. But that is still not a roadblock, only a small speed bump that may stop someone who is searching for an injection hole manually. In real life, intruders use automated tools to find SQL injection holes. A site written in the style of BadInsert will of course have many injection holes to try. An MVP colleague demoed such a program to me. He directed it at a couple of web sites, and it took the program maybe ten seconds to determine that a site was vulnerable to SQL injection.

It is very important to understand that the people who engage in this are not lonesome teenage boys who want to show off. No, the reason people spend their time finding SQL injection holes spells m-o-n-e-y. If the database handles money by itself, the money can be made directly in the system. The money can also be made by stealing the information in the database and selling it. Or the intruder can use the SQL Server machine as an entrance to the rest of the network to infect all PCs in the organisation, in preparation for attacking a third party in a DDOS attack the intruder is paid to perform. Or the intruder may install ransomware and blackmail the site. Or…

Just to take one example from real life: Marriott announced in 2018 that almost 500 million customers' data had been leaked in a major data breach. And one of the means in the attack was, yes, SQL injection. You can search for Marriott and sql injection in your favourite search engine to find articles about the incident. Here is also a direct link to one of the articles I found.

While the biggest dangers are with web applications exposed on the Internet, this does not mean that you can ignore the risk of SQL injection in intranet applications or Windows clients. There may be malicious or disgruntled employees who want to steal data or just cause a mess in general. And there can be computers in the organisation that have been infected with malware which tries to attack every machine it finds.

So how do you protect yourself against SQL injection? In the xkcd strip, the mom talks about sanitising database inputs. What she presumably is alluding to is that the single quotes in the input string should be doubled. But that alone does not protect you against SQL injection; there are injection attacks that do not make use of the single quote at all. Therefore, it is not uncommon to see suggestions that some characters should not be permitted in input at all, but that is not a very fruitful way to go, since these characters may be part of perfectly legitimate input values. Just imagine that Brian O'Brien finds that he cannot enter his name, but is told that it includes an illegal character!

No, there is a much better way to stop SQL injection and you have already seen it: parameterised statements. When input values are passed as parameters through SqlParameterCollection and eventually to sp_executesql, SQL injection cannot happen, because there is no place to inject anything. That said, there is still one more precaution to take: the application should be running with a limited set of permissions. It is far too common for web applications to run with elevated permissions or even as sa. And that is very bad. After all, even if you by now have learnt how to avoid SQL injection, it only takes one junior programmer who has not, and there is an SQL injection hole. Therefore, an application should never be granted more than membership in db_datareader and db_datawriter. That will limit the damage of an SQL injection hole. Even better, the application should only use stored procedures throughout. In that case, all you need to grant is EXECUTE permissions. (You can do this on schema or database level, no need to do it per procedure.)

Note: some readers may come with objections like "our application creates tables dynamically" etc., so it must have elevated permissions. In that case, you should package these permissions inside stored procedures, and I describe this in detail in my article Packaging Permissions in Stored Procedures. Applications should never run with elevated permissions, period!

SQL Injection in Dynamic SQL in Stored Procedures

When you work with dynamic SQL in a stored procedure, you should of course use sp_executesql with a parameterised statement to protect yourself against SQL injection in plain input values like search parameters etc. However, in the vast majority of cases when you build an SQL string dynamically inside a stored procedure, it is because there is some input value that you cannot pass as a parameter, but which has to be inserted into the SQL string, for instance a table name.

Here is an example. Charlie Brown is the DBA at a college where each student has their own little database where they can do whatever they like. Charlie wants to be helpful, so he schedules a job that loops through all tables in the student databases to defragment them. Here is the core of that script:

DECLARE @sql  varchar(MAX)

DECLARE table_cur CURSOR STATIC LOCAL FOR
   SELECT 'ALTER INDEX ALL ON [' + name + '] REBUILD ' FROM sys.tables

OPEN table_cur

WHILE 1 = 1
BEGIN
   FETCH table_cur INTO @sql
   IF @@fetch_status <> 0
      BREAK

   PRINT @sql
   EXEC(@sql)
END

DEALLOCATE table_cur

This isn’t an excellent script. Charlie has added brackets across the desk title, so the script is not going to fail if the scholar has created tables with areas or different humorous characters within the title. One flaw is that he has neglected that tables should not have be within the dbo schema, however that’s the small drawback. One of many college students on the college is Joe Cool. Here’s a script to create his database:

CREATE LOGIN JoeCool WITH PASSWORD = 'CollegeKid'
CREATE DATABASE JoeCool 
ALTER AUTHORIZATION ON DATABASE::JoeCool TO JoeCool
SELECT is_srvrolemember('sysadmin', 'JoeCool')

The final SELECT informs you that Joe is not a member of sysadmin. But that is exactly Joe's ambition, so in his database, he creates this:

USE JoeCool
go
CREATE TABLE tbl (a int NOT NULL)
CREATE TABLE [tbl]] REBUILD; ALTER SERVER ROLE sysadmin ADD MEMBER JoeCool --]
             (a int NOT NULL)

Yes, that last table has a funny name. Just as funny as Bobby Tables in the xkcd strip above. At night, Charlie's Agent job runs, working with sysadmin permissions of course. Run the cursor batch above in the database JoeCool, and look at the statements (I have added a line break for legibility):

ALTER INDEX ALL ON [tbl] REBUILD 
ALTER INDEX ALL ON 
      [tbl] REBUILD; ALTER SERVER ROLE sysadmin ADD MEMBER JoeCool --] REBUILD

Now run this again:

SELECT is_srvrolemember('sysadmin', 'JoeCool')

Joe is now sysadmin.

Look at how Joe constructed his table name, which I print out here for clarity: tbl] REBUILD; ALTER SERVER ROLE sysadmin ADD MEMBER JoeCool --. (You can also run a query over sys.tables to see it.) He started with the name of another table in the database. He then added a right bracket, which he doubled in the CREATE TABLE script, since it was inside a bracketed name. The purpose of the bracket is to balance the left bracket in Charlie's script. Joe also added REBUILD to make the command syntactically complete. He then added the command he wanted to run, and at the end he added comment characters to kill the last part of Charlie's command.

Note: the command ALTER SERVER ROLE that Joe uses was introduced in SQL 2012. You can use sp_addsrvrolemember for the same trick on SQL 2008 and earlier.

There is a very simple change that Charlie can make to his script to stop Joe's attack. Before we look at it, let's first deprive Joe of his membership in sysadmin:

ALTER SERVER ROLE sysadmin DROP MEMBER JoeCool

Now change the SELECT statement in the cursor to read:

SELECT 'ALTER INDEX ALL ON ' + quotename(name) + ' REBUILD ' FROM sys.tables

Run the cursor again. This is the output (again with a line break to fit the page width):

ALTER INDEX ALL ON [tbl] REBUILD 
ALTER INDEX ALL ON 
      [tbl]] REBUILD; ALTER SERVER ROLE sysadmin ADD MEMBER JoeCool --] REBUILD

At first glance, it may seem identical to the above. However:

SELECT is_srvrolemember('sysadmin', 'JoeCool')

now returns 0. Joe was not promoted to sysadmin. If you look closer you can see why: the right bracket in Joe's table name is now doubled, so the statement now achieves Charlie's goal: to rebuild the index on the table with the funny name. That is the purpose of the quotename function. Before we look closer at this important function, I want to make a comment on the example as such. Normally, you may think of a dynamic table name as something that comes in through a procedure parameter, and which ultimately may come from outside the server. But as you saw in this example, the attacker may also come from within the SQL Server instance.

Quotename and quotestring

Whenever you build dynamic SQL statements where a variable holds the name of something, you should use quotename. quotename returns its input surrounded by square brackets, and any right brackets are doubled, as seen here:

SELECT quotename('a'), quotename('[a]')

The output is:

[a] [[a]]]


In this way, you protect yourself from SQL injection in metadata names. When you receive a table name as a parameter in a stored procedure, there are some more precautions, which are more related to interface design than to SQL injection as such, so we will look at them later in the style-guide chapter.
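As a minimal sketch of the pattern (the procedure is made up for illustration, and it assumes a table in the dbo schema with an int column id): the name goes through quotename before it is concatenated into the SQL string, while plain values still travel as parameters.

CREATE OR ALTER PROCEDURE count_rows @tblname sysname,
                                     @minid   int AS
DECLARE @sql nvarchar(MAX)
-- quotename protects the object name; @minid stays a real parameter.
SELECT @sql = N'SELECT COUNT(*) FROM dbo.' + quotename(@tblname) +
              N' WHERE id >= @minid'
EXEC sp_executesql @sql, N'@minid int', @minid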

quotename accepts a second parameter that permits you to specify an alternate delimiter. Here are some examples to illustrate this:

SELECT quotename('Brian O''Brien', '''')      -- 'Brian O''Brien'
SELECT quotename('double (") quote', '"')     -- "double ("") quote"
SELECT quotename('(right) paren', '(')        -- ((right)) paren)
SELECT quotename('{brace} place', '}')        -- {{brace}} place}
SELECT quotename('right <angle>', '<')        -- <right <angle>>>
SELECT quotename('back`tick', '`')            -- `back``tick`
SELECT quotename('no $ money', '$')           -- NULL

In all cases, the closing delimiter is doubled. As you can see, for the paired delimiters, you can pass either the left or the right delimiter, but it is always the closing delimiter that is doubled. As the last example suggests, you cannot pass any arbitrary character as the delimiter, but all the supported ones are shown above. Of these, the most important is the first one, which you need when you want to incorporate a string value in your statement. For example, here is a batch that creates a database with the name set dynamically:

DECLARE @dbname   sysname = 'My New Database',
        @datadir  nvarchar(128),
        @logdir   nvarchar(128),
        @sql      nvarchar(MAX)
SET @datadir = convert(nvarchar(128), serverproperty('InstanceDefaultDataPath'))
SET @logdir = convert(nvarchar(128), serverproperty('InstanceDefaultLogPath'))

SELECT @sql = 'CREATE DATABASE ' + quotename(@dbname, '"') + '
    ON (NAME     = ' + quotename(@dbname, '''') + ', 
        FILENAME = ' + quotename(@datadir + @dbname + '.mdf', '''') + ')
    LOG ON (NAME     = ' + quotename(@dbname + '_log', '''') + ', 
            FILENAME = ' + quotename(@logdir + @dbname + '.ldf', '''') + ')'
PRINT @sql
EXEC(@sql)

This was the statement generated on my machine:

CREATE DATABASE "My New Database"
    ON (NAME     = 'My New Database', 
        FILENAME = 'S:\MSSQL\SQLXI\My New Database.mdf')
    LOG ON (NAME     = 'My New Database_log', 
            FILENAME = 'S:\MSSQL\SQLXI\My New Database.ldf')

Note: this example does not run on SQL 2008 and earlier. These two parameters to serverproperty were introduced in SQL 2012.

Observe that what I pass to quotename is the full path of the two database files; that is, the string concatenation occurs inside quotename. For the sake of the example, I used double quotes to delimit the database name, rather than the more common square brackets. (The latter is Microsoft-specific, whereas the double quote is the ANSI standard.) As for the other delimiters you can use with quotename, the situations where you get to use them are few and far between.

quotename is a very convenient function, but there is a trap you need to watch out for. As the name suggests, quotename is intended to be used with object names. Its input is limited to 128 characters (which is the maximum length of an identifier in SQL Server), as seen in this example:

DECLARE @x nvarchar(255) = replicate('''', 128)
SELECT quotename(@x, '''')
SET @x = replicate('''', 129)
SELECT quotename(@x, '''')

The first SELECT returns a long list of single quotes (more precisely, 258 of them!), but the second returns NULL. Thus, as long as you only use quotename to delimit identifiers in brackets, you are safe. But when you use quotename to delimit string values in single quotes, you need to consider that the value may exceed 128 characters in length. The CREATE DATABASE example above is certainly vulnerable to this risk. The return type for the two serverproperty parameters is nvarchar(128), and that is also the definition of sysname. Thus, the full file name can be up to 260 characters long.

If you want to eliminate this risk, you can use the user-defined functions quotestring and quotestring_n. Both accept nvarchar(MAX) as input and return the same data type. They wrap their input in single quotes and double any single quotes within the string. quotestring_n also tacks on an N to make the returned string an nvarchar literal. With quotestring, you need to cater for this yourself. They are not built-in, but are listed below:

CREATE FUNCTION quotestring(@str nvarchar(MAX)) RETURNS nvarchar(MAX) AS
BEGIN
   DECLARE @ret nvarchar(MAX),
           @sq  nchar(1) = ''''
   SELECT @ret = replace(@str, @sq, @sq + @sq)
   RETURN(@sq + @ret + @sq)
END
go
CREATE FUNCTION quotestring_n(@str nvarchar(MAX)) RETURNS nvarchar(MAX) AS
BEGIN
   DECLARE @ret nvarchar(MAX),
           @sq  nchar(1) = ''''
   SELECT @ret = replace(@str, @sq, @sq + @sq)
   RETURN('N' + @sq + @ret + @sq)
END

We will see examples of using these functions throughout this article.
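As a minimal sketch of what they return (assuming the two functions above have been created in the current database):

SELECT dbo.quotestring(N'Brian O''Brien')   AS q,
       dbo.quotestring_n(N'Brian O''Brien') AS qn
-- q:  'Brian O''Brien'      qn:  N'Brian O''Brien'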

It’s possible you’ll ask: when ought to I exploit quotename (or quotestring)? Each time you insert an object title or a string worth from a variable into your SQL string, with none exception. Take the database instance above: the 2 paths will not be possible assault vectors, since it’s good to be sysadmin or a Home windows administrator to alter them. However the database title might come from an untrusted supply (like our school child Joe Cool) and include one thing malicious. And even when all that is hypothetic to you, there’s nonetheless an argument of basic robustness: if any of the names would occur to incorporate a single quote, why ought to the command fail with a syntax error?

The Importance of nvarchar

This is an SQL injection trap that I was unaware of myself, until I happened to be reading Remus Rusanu's blog for something else while I was working on this article.

There is one more flaw in Charlie Brown's script above, which Joe Cool exploits by creating this table:

CREATE TABLE [tbl〛 REBUILD; ALTER SERVER ROLE sysadmin ADD MEMBER JoeCool --]
             (a int NOT NULL)

That funny character after the last character in tbl is the Unicode character U+301B, Right White Square Bracket. Night comes, Charlie's enhanced script using quotename runs, and this is printed (with line breaks added):

ALTER INDEX ALL ON [tbl] REBUILD 
ALTER INDEX ALL ON
      [tbl]] REBUILD; ALTER SERVER ROLE sysadmin ADD MEMBER JoeCool --] REBUILD 
ALTER INDEX ALL ON 
      [tbl] REBUILD; ALTER SERVER ROLE sysadmin ADD MEMBER JoeCool --] REBUILD 

Note that the funny double-bracket character is now a regular right bracket. If you run

SELECT is_srvrolemember('sysadmin', 'JoeCool')

you see that Joe Cool is once again sysadmin.

The reason this happened is that Charlie was sloppy in his script when he declared the @sql variable:

DECLARE @sql  varchar(MAX)

It is varchar, not nvarchar. When data is converted from nvarchar to varchar, any character in the Unicode string that is not available in the smaller character set used for varchar is replaced with a fallback character. And in this case, the fallback character is a plain right bracket. quotename did not help here, because quotename worked on the input from sys.tables, which is nvarchar, and the conversion to varchar happened after quotename had done its job.

Fix the declaration of @sql and take Joe out of sysadmin:

ALTER SERVER ROLE sysadmin DROP MEMBER JoeCool

Change the variable declaration and run Charlie's script again. This time the output is:

ALTER INDEX ALL ON [tbl] REBUILD 
ALTER INDEX ALL ON 
      [tbl]] REBUILD; ALTER SERVER ROLE sysadmin ADD MEMBER JoeCool --] REBUILD 
ALTER INDEX ALL ON 
      [tbl〛 REBUILD; ALTER SERVER ROLE sysadmin ADD MEMBER JoeCool --] REBUILD 

The right white square bracket is still there, and is_srvrolemember informs us that Joe is not sysadmin.

This character is not the only one; there are more Unicode characters that can be used for such exploits. For instance, in his blog Remus mentions Modifier Letter Apostrophe, U+02BC, which is replaced by a single quote when the string is cast to varchar.

We can also use this example to chalk one up for sp_executesql. Since sp_executesql requires you to use nvarchar for your @sql variable, you cannot open up such a hole; it can only happen to you if you use EXEC().

Note: While the main purpose of these examples is to demonstrate that SQL injection can sneak in where you may not expect it, Charlie's biggest mistake is another one: he is running with elevated permissions when he should not. Whenever you, as a server-level DBA, run code in a user database you cannot trust, you should bracket your commands in EXECUTE AS USER = 'dbo' and REVERT. This sandboxes you into the current database, so that you no longer have permissions on server level, and a user like Joe cannot exploit them to become sysadmin or do anything else he cannot do himself, given that he has db_owner rights in the database. There are other attacks Joe can perform than the SQL injection attacks we have seen. For instance, he could add a DDL trigger that fires on ALTER INDEX and adds him to sysadmin when Charlie's script runs.
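A minimal sketch of that sandboxing pattern, applied to the execution step of Charlie's loop (the surrounding cursor is unchanged and omitted here):

-- Impersonate dbo in the current (untrusted) database, so that the
-- dynamic command cannot make use of Charlie's server-level permissions.
EXECUTE AS USER = 'dbo';
EXEC(@sql);
REVERT;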

Other Injection Holes

In all examples so far, the attack vector has been a place where a variable that is supposed to hold a single token is concatenated into the SQL string. Since it is a single token, protection with parameterisation and quotename is simple.

If you have variables that are supposed to hold multiple tokens, it can be very difficult to protect yourself against SQL injection. Here is one example, which I still see far too often:

SELECT @list = '1,2,3,4'
SELECT @sql = 'SELECT * FROM tbl WHERE id IN (' + @list + ')'

How do you make sure that @list really is a comma-separated list of values and not something like '0) DROP DATABASE abc --'? The answer in this particular case is that you should not use dynamic SQL at all, but use a list-to-table function instead. See my short article Arrays and Lists in SQL Server for alternatives.
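For instance, one such alternative on SQL 2016 and later (compatibility level 130 or higher) is the built-in string_split function, which keeps the whole list in a parameter. A minimal sketch, assuming the tbl table and its id column from the example above:

DECLARE @list nvarchar(MAX) = N'1,2,3,4'
-- The list never becomes part of the SQL text, so there is nothing to inject into.
SELECT t.*
FROM   dbo.tbl AS t
WHERE  t.id IN (SELECT try_cast(s.value AS int)
                FROM   string_split(@list, ',') AS s)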

Another example is this: say that you have a stored procedure that returns a result set, and you are already using dynamic SQL to produce that result set. You decide to beef it up by adding a parameter to specify the sort order. It may seem as simple as:

CREATE OR ALTER PROCEDURE my_dynamic_sp .....
                          @sortcols nvarchar(200) = NULL AS
   ...
   IF @sortcols IS NOT NULL
      SELECT @sql +=  ' ORDER BY ' + @sortcols
   ...

But as you understand by now, this procedure is wide open to SQL injection. In this case, you may argue that you have it all under control: you are displaying the column names to the user in a dropdown or some other UI element, and there is no place where the user can type anything. That may be true today, but it can change two years from now, when it is decided to add a web interface, and suddenly the sort columns come as parameters in a URL (which the user of course has every power to manipulate). When you are writing a stored procedure for application use, you should never make any assumptions about the client, even if you write the client code as well. Least of all when it comes to serious security risks such as SQL injection.

There are several ways to skin this cat. You could parse @sortcols and then apply quotename to the individual columns. Personally, I think I would prefer to use a three-column table-valued parameter (one column for the position, one for the column name and a bit column for ASC/DESC), as sketched a little further below. Or I would prefer to entirely decouple the client from the column names and do something like this:

SELECT @sql += CASE @sortcol1 
                    WHEN 'col1' THEN 'col1'
                    WHEN 'col2' THEN 'col2'
                    ...
                    ELSE             'col1'   -- Have to have a default.
                END + ' ' + CASE @isdesc1 WHEN 0 THEN 'ASC' ELSE 'DESC' END

If col2 were to be renamed to something else, the client would not be affected.
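And here is a minimal sketch of the table-valued parameter idea mentioned above (it uses string_agg, which requires SQL 2017 or later; the type, table and column names are made up for illustration, and quotename still guards the column names):

-- Hypothetical table type for the sort specification.
CREATE TYPE dbo.sortorder_type AS TABLE
    (position int     NOT NULL PRIMARY KEY,
     colname  sysname NOT NULL,
     isdesc   bit     NOT NULL)
go
DECLARE @sortorder dbo.sortorder_type,
        @sql       nvarchar(MAX) = N'SELECT col1, col2 FROM dbo.tbl',
        @orderby   nvarchar(MAX)

-- In a stored procedure, @sortorder would be a READONLY parameter of this type.
INSERT @sortorder (position, colname, isdesc) VALUES (1, N'col2', 1), (2, N'col1', 0)

-- Build the ORDER BY clause; quotename protects the column names.
SELECT @orderby = string_agg(quotename(colname) +
                             CASE isdesc WHEN 1 THEN N' DESC' ELSE N' ASC' END, N', ')
                  WITHIN GROUP (ORDER BY position)
FROM   @sortorder

IF @orderby IS NOT NULL
   SELECT @sql += N' ORDER BY ' + @orderby
PRINT @sql   -- SELECT col1, col2 FROM dbo.tbl ORDER BY [col2] DESC, [col1] ASC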

And to generalise this: whenever you consider passing variables with multi-token SQL text from an application to your stored procedure, you need to work out a solution so that the procedure is not open to SQL injection. This is by no means simple, since you have to parse fragments of SQL code, something which T‑SQL is not well equipped for. There may be every reason to reconsider your approach entirely. For instance, if you have the idea of passing a full-blown WHERE clause, you should probably not use a stored procedure at all, but assemble all SQL in the client. (Passing a WHERE clause from client to server is not only bad because of the risk of SQL injection, but also because it introduces a tight coupling between client and server that will be a maintenance headache in the long run.) There is no reason to use stored procedures just for the sake of it. There are scenarios which are very dynamic, where you want to give users the ability to retrieve data with a high degree of flexibility. Such scenarios are best solved client-side, and in such solutions, the SQL code is likely to be chopped up in small fragments that are pieced together. Obviously, you should still protect yourself against SQL injection, and you should still use parameters for values in WHERE clauses. Names of columns and tables should never come from text fields or URLs, but from sources you have full control over.

So far application code. For a pure DBA utility you can permit yourself to be a little more liberal. To take one example, in my article Packaging Permissions in Stored Procedures, I present a stored procedure GrantPermsToSP, of which one parameter is a TVP with permissions to be inserted into a GRANT statement. I make no validation of this parameter, and the procedure is indeed open to SQL injection. However, as it is intended to be called from SSMS by people who are in the db_owner role, there is nothing they can gain by exploiting the injection hole – they cannot run more powerful commands through the procedure than they can run directly themselves. And that is the quintessence: SQL injection is a threat when it permits users to elevate their permissions beyond what they can do on their own from SSMS or similar. (Keep in mind that for web users who have no direct access to SQL Server at all, any command they can inject is an elevation.) From this follows that if you sign your utility procedure with a certificate to package permissions with the procedure, to permit low-privileged users to run some privileged command in a controlled way (and that is exactly what you use GrantPermsToSP for), you must absolutely make sure that you have no SQL injection hole.

Performance

You may ask whether there are any performance differences between using dynamic SQL and static SQL in stored procedures. The answer is that as long as you follow a few guidelines when you compose your dynamic SQL, there is hardly any difference at all. (Well, if you produce a 2000-line beast in your client and pass it to SQL Server, you should probably consider moving it to a stored procedure, as there is a cost for passing all that code across the wire.)

When you compose your dynamic SQL, there are two things you need to observe to avoid wreaking havoc with performance. One of them you have already learnt. That is, you should always parameterise your statements and not inline simple parameter values, for several reasons:

  • It makes the code easier to read.
  • It makes it easier to deal with dates.
  • It makes it easier to deal with input like Brian O'Brien.
  • You protect yourself against SQL injection.

Now you will learn one more reason: it is better for performance. Not so much for your own individual query, but for the server as a whole. This is related to how SQL Server caches query plans. For a stored procedure, SQL Server looks up the query plan in the cache by the name of the stored procedure. A batch of dynamic SQL does not have a name. Instead, SQL Server computes a hash value from the query text and makes a cache lookup on that hash. The hash is computed on the query text as-is, without any normalisation of spaces, upper/lower case, comments etc. And most importantly, without normalisation of parameter values inlined into the string. Say that you have something like this:

cmd.CommandText = @"SELECT ... 
                   FROM    ...
                   JOIN    ...
                   WHERE col1 = " + IntValue.ToString();

Say now that this command text is invoked for IntValue = 7, 89 and 2454. This results in three different cache entries. Now imagine that the query has several search parameters and there is a multitude of users passing different values. There will be a lot of cache entries, and this is not good for SQL Server. The more entries there are, the bigger the risk for hash collisions. (That is, two different query texts have the same hash value, whereupon SQL Server needs to compare the query texts directly.) On top of that, the cache starts to consume a lot of memory that could be put to better use elsewhere. Eventually, this can lead to plans being evicted from the cache, so that recurring queries have to be recompiled. And last but not least: all this compilation takes a big toll on the CPU.
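If you want to see the effect for yourself, you can inspect the plan cache with the DMVs (a minimal sketch; the LIKE filter is just an example, and you need VIEW SERVER STATE permission):

-- One row per cached entry; an application that inlines values shows
-- many single-use entries for what is logically the same query.
SELECT cp.usecounts, cp.objtype, st.text
FROM   sys.dm_exec_cached_plans AS cp
CROSS  APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE  st.text LIKE N'%WHERE col1 =%'
ORDER  BY cp.usecounts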

I also want to make you aware that when SQL Server computes the hash value, it includes the parameter list. Thus, these two calls will result in two cache entries:

EXEC sp_executesql N'SELECT COUNT(*) FROM Customers WHERE City = @city', 
                   N'@city nvarchar(6)', N'London'
EXEC sp_executesql N'SELECT COUNT(*) FROM Customers WHERE City = @city', 
                   N'@city nvarchar(7)', N'Abidjan'

Note the difference in length in the declaration of the @city parameter.

And this is what you get if you define the parameter this way in .NET:

 cmd.Parameters.Add("@strval", SqlDbType.NVarChar).Value = city;

When you omit the length, .NET sets the length from the actual value of city. So this is why you should always set the length explicitly for string and binary parameters. And stay away from AddWithValue.

Note: Being an honest person, I must confess that I have lied a little here. For very simple queries, SQL Server applies auto-parameterisation, so that constants are replaced by parameter placeholders. The INSERT statement in the clientcode program is an example of such a query. There is also a database setting, forced parameterisation, under which SQL Server replaces just about any constant in a query with a parameter. This setting is mainly intended as a cover-up for poorly written applications that inline parameter values into SQL strings. But you should never assume that auto-parameterisation will save the show; you should always use parameterised statements.

On this note I should also point out that there is a configuration option, optimize for ad hoc workloads. When this is set, the plan for an unparameterised query string is not cached the first time it appears; only a shell query is stored in the cache. The plan is cached only if the exact same query string appears a second time. This reduces the memory footprint, but the hash collisions will still be there. It is generally considered best practice to have this option turned on, but with a well-written application, it should not really matter.
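For reference, these are the commands for the two settings mentioned above (shown only so that you can recognise them; neither is a substitute for parameterising your statements, and YourDatabase is a placeholder name):

-- Server-wide: cache only a shell on the first occurrence of an ad-hoc query.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;

-- Per database: replace (almost) all constants with parameters.
ALTER DATABASE YourDatabase SET PARAMETERIZATION FORCED;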

Note that the rule about parameterising applies to values that come as input from the user or some other source. A query may have a condition like

AND customers.isactive = 1

That is, the query is hard-wired to return only active customers. In this case, there is absolutely no reason to make the 1 a parameter.

There is one more measure you need to take to reduce cache littering: always schema-qualify your tables, even if all your tables are in the dbo schema. That is, you should write:

cmd.CommandText = @"SELECT ... 
                   FROM    dbo.sometable
                   JOIN    dbo.someothertable ON ...
                   WHERE   col1 = @id"

The reason for this is as follows. Assume two users, Jack and Jill. Jack has the default schema Jack and Jill has the default schema Jill. If you omit dbo and say only FROM sometable, SQL Server needs to consider that at any point in time a table named Jack.sometable or Jill.sometable could appear, in which case Jack's and Jill's queries should go against those respective tables. For this reason, Jack and Jill cannot share the same cache entry; there has to be one cache entry each for them. If there are many users, this can lead to many cache entries for the same query.

Note that this applies to all objects that belong to a schema, not only tables. If you miss a single object, the cache entry cannot be shared by users with different default schemas. Here is a list of object types for which you need to specify the schema:

  • Tables.
  • Views.
  • User-defined functions of all kinds.
  • Stored procedures.
  • Synonyms.
  • Service Broker queues.
  • Sequences.
  • User-defined types of all kinds (table types, CLR types etc.).
  • XML schema collections.

Keep in mind that this applies not only to the statements in the dynamic SQL itself, but also to the parameter list, which is also part of what is cached. As one example, we had this statement earlier in the text:


   EXEC sp_executesql N'SELECT a FROM @tbl SELECT @a',
                      N'@tbl dbo.mytbltype READONLY, @a int', @mytbl, @a

Note: When you create a user with CREATE USER, the default schema is set to dbo, so there is a fair chance that all users have the same default schema and thus can share cached query plans even if you omit the schema. But there is no reason to assume that this is always true. There are older commands to create users, and they will also create a schema for the user and make that the default schema.
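If you are curious about the default schemas in your own database, a query like this (a minimal sketch) lists them:

-- Default schema per database user.
SELECT name, type_desc, default_schema_name
FROM   sys.database_principals
WHERE  type IN ('S', 'U', 'G')   -- SQL users, Windows users and groups
ORDER  BY name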

On this part I’ve solely used examples in shopper code, however every part I’ve mentioned applies to dynamic SQL created in a saved process as effectively.

Working with Dynamic SQL – a Fashion Information

You may have learnt the fundamental instructions for dynamic SQL, and you’ve got been warned about SQL injection. Now could be the time to study extra about dynamic SQL from a sensible standpoint. It’s possible you’ll suppose that writing dynamic SQL is an easy process: construct strings, how troublesome can or not it’s? From expertise, I can inform you that it’s tougher to get it proper than it might appear initially.

Writing good dynamic SQL just isn’t solely about talent. No, possibly a very powerful advantage is self-discipline. In case you are constant and cautious once you write your dynamic SQL, you should have fewer bugs and your code will likely be simpler to learn and preserve. Then again, poorly written code that generates dynamic SQL tends to be very troublesome, to not say inconceivable, to learn and perceive. The generated code can simply get buried within the T‑SQL code that generates it with all its single quotes and plusses. Typically, the less of those you may have, the extra readable your code will likely be, and the much less is the danger for errors. You may have already learnt a very powerful gadget: use parameterised statements. However since you can not parameterise every part, that isn’t sufficient. On this chapter I’ll give some tips about easy methods to enhance your dynamic SQL from completely different angles: easy methods to make it extra readable, simpler to troubleshoot, extra strong and safer.

Begin Out Static

Earlier than you begin writing dynamic SQL, it’s good to have an excellent understanding of what the question that you’re aiming to generate goes to appear like. The easiest way is to start out with writing a completely static question, the place you quickly hardcode the components you wish to be dynamic. Upon getting the static question working, you should use that as a place to begin to jot down the code that builds the dynamic question. In the event you attempt to remedy abruptly, you could end up in a scenario the place you do not know if it’s the question logic as such that’s improper, or whether or not you may have constructed the question within the improper approach.

The Necessity of Debug Prints

If you are new to dynamic SQL, the first thing you need to learn is to use debug prints. All stored procedures that deal with dynamic SQL should include a final parameter @debug:

@debug bit = 0 AS

And everywhere in the code where you are about to run some dynamic SQL, you should have:

IF @debug = 1 PRINT @sql
EXEC sp_executesql @sql, @params, @par1, ...

This is absolutely essential, because every now and then your SQL string will not do what you intended, and it may even fail to compile. If you get an error message and you cannot see what the string looks like, you are in a completely dark room. On the other hand, if you can see the SQL code, the error may be immediately apparent.

If your SQL string is very long, the statement may come out truncated. The PRINT statement prints at most 8000 bytes, so with nvarchar(MAX) for your @sql variable, you will only see up to 4000 characters.
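A minimal sketch that demonstrates the truncation:

DECLARE @sql nvarchar(MAX) =
        replicate(convert(nvarchar(MAX), N'x'), 5000)
PRINT convert(varchar(10), len(@sql))   -- 5000
PRINT @sql   -- Only the first 4000 characters appear in the Messages tab.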

One way to avoid this is to use SELECT instead of PRINT. That gives you the string in a grid cell in SSMS, and when you copy it to a text window, you may find that it is all one line. Thankfully, modern versions of SSMS have a setting to control this. In SSMS, go to Tools->Options and check the setting Retain CR/LF on copy or save, highlighted in this picture:


Retain CR/LF on copy or save

You can also see that I have increased the maximum for Non XML data to two million (the default is 65536) to avoid truncation with very long strings. Note that if you change these settings, they do not affect currently open query windows, only windows you open after the change.

Note: To see a maximum above 65536, you need SSMS 18.2 or later. The checkbox for retaining CR/LF was introduced in SSMS 16, and it is unchecked by default. If you do not see this setting, you have an older version of SSMS. In SSMS 2012 and 2014, CR/LF are always retained, and in SSMS 2005 and 2008, they are always replaced by a single space. In any case, since SSMS is a free download, there is little reason to use an old version.

While this works, it is not without issues:

  • If you frequently copy data to Excel, you may want to keep the Retain setting unchecked. (Because if there are line breaks in the data, the mapping to cells in Excel will be messed up.)
  • There are situations where extra result sets from your stored procedure may mess things up, for instance if you use INSERT-EXEC, so you would prefer to get the code printed to the Messages tab in SSMS.
  • If your procedure produces several batches of dynamic SQL, it may be more practical to have them all collected in the Messages tab, rather than having to copy several result sets from the query grid.

I got a suggestion from Jonathan Van Houtte that addresses the first point (but not the other two) and which uses the XML capabilities in SQL Server and SSMS:

SELECT @sql AS [processing-instruction(x)] FOR XML PATH('')

In the grid cell, you get a clickable XML string, and when you click on it, it opens in a new window where you see the SQL text with the CR/LF retained. The code is wrapped in a <?x tag, but apart from that, there is no mutilation due to the XML. Only if the SQL string happens to include the sequence ?> could you get problems.

My co-worker Jan Swedin came up with the ultimate solution to this problem: a stored procedure that breaks up the text in chunks of 8000 bytes, taking care that each chunk ends with a line break, and then runs PRINT on every chunk. Here is an adapted version of his procedure:

CREATE OR ALTER PROCEDURE SystemPrint @str nvarchar(max) AS
SET XACT_ABORT, NOCOUNT ON;
DECLARE @noofchunks int, 
        @chunkno    int = 0,  
        @pos        int = 0, -- substring start position
        @len        int;     -- substring size

SET @str = replace(@str, nchar(13), '');
SET @noofchunks = ceiling(len(@str) / 4000.0);

WHILE @chunkno < @noofchunks
BEGIN
   SET @len = 4000 - charindex(nchar(10) COLLATE Latin1_General_BIN2,
                               reverse(substring(@str, @pos, 4000)));
   PRINT substring(@str, @pos, @len);
   SET @chunkno += 1;
   SET @pos += @len + 1; -- accumulation + LF
END

Word: there’s a identified situation with this process. If the size of the SQL string is near a full chunk, for example 7998 characters, the previous few characters within the string might not print. It’s left as an train to the reader to grasp how this may occur and repair the problem.

Regardless of which of those strategies you go for, all the time have a debug print of your dynamic SQL. And, sure, I imply ALWAYS!

Error Messages and Line Numbers

While you work with dynamic SQL, you’ll ever so usually encounter error messages. Generally the error is said to the code that generates the dynamic SQL, typically it comes from the dynamic SQL itself. In both case, it may be a compilation error or a run-time error. It goes with out saying that it’s good to perceive the origin of the error, so that you just begin wanting on the proper piece of code. Sadly, that is one thing which is extra difficult than it must be, on account of a characteristic in latest variations of SSMS.

For this part we’ll work with the duty of making an SQL login, a database consumer for that login and including the consumer to the position PlainUsers. Enter for the operation is the login title and the password. We are going to first have a look at the scenario the place the dynamic SQL is generated inside a saved process. Copy this code, which has numerous errors in it, into an empty question window.

USE tempdb
go
IF NOT EXISTS (SELECT * FROM sys.database_principals WHERE name = 'PlainUsers')
   CREATE ROLE PlainUsers
go
CREATE OR ALTER PROCEDURE add_new_user @name     sysname, 
                                       @password nvarchar(50), 
                                       @debug    bit = 0 AS
   DECLARE @sql nvarchar(MAX)
   SELECT @sql = 'CREATE LOGIN ' + quotename(@name) + ' 
                     WITH PASSWORD = ' + quotename(@password, '''') + ' MUST_CHANGE,
                          CHECK_EXPIRATION
                 CREATE USER ' + quotename(@name) + '
                 ALTER ROLE PlainUsers ADD MEMBER ' + quotename(@name)
   IF @debug = 1
      PRINT @sql
   EXEC sp_exectesql @sql
go
EXEC add_new_user 'nisse', 'TotaL1y#and0m', 1,

Note: If you are running these examples on SQL 2008 or earlier, you may note that the code uses the syntax ALTER ROLE ADD MEMBER, which was added in SQL 2012. However, since we are exploring error messages, this should not be of much practical importance to you.

The first error message we encounter when we try this piece of code has no relation to dynamic SQL as such, but serves to highlight a behaviour in modern versions of SSMS that is important to understand for this section. This is the output when you run the script in SSMS 2014 or later:

The module 'add_new_user' depends on the missing object 'sp_exectesql'. The module will still be created; however, it cannot run successfully until the object exists.

Msg 102, Level 15, State 1, Line 19
Incorrect syntax near ','.

If you double-click the error message, you will find that you end up on the line with the EXEC statement after the procedure, and indeed there is a trailing comma that should not be there. And if you look in the status bar, you will find that it says Ln 19, which agrees with the error message. However, this is not the line number that SQL Server reports. Keep in mind that SSMS sends one batch, as defined by the go separator, at a time to SQL Server, which has no knowledge of your query window. The call to add_new_user is a single-line batch, so the actual line number reported by SQL Server is Line 1. This is also the line number you would see if you were to run this in SSMS 2012 or earlier. You can convince yourself of this fact by saving the script to a file and running it through SQLCMD, which makes no adjustments. Why this is important to understand will become clear in just a few moments.

Remove the offending comma (or replace it with a semicolon) and try again. This time, the output is:

The module 'add_new_user' depends on the missing object 'sp_exectesql'. The module will still be created; however, it cannot run successfully until the object exists.

CREATE LOGIN [nisse]
                     WITH PASSWORD = 'TotaL1y#and0m' MUST_CHANGE,
                          CHECK_EXPIRATION
                 CREATE USER [nisse]
                 ALTER ROLE PlainUsers ADD MEMBER [nisse]

Msg 2812, Level 16, State 62, Procedure add_new_user, Line 12 [Batch Start Line 18]
Could not find stored procedure 'sp_exectesql'.

(The part Batch Start Line 18 appears only in SSMS 16 and later. If you are using SSMS 2014 or earlier, you will not see that part.) Note here that the error message includes the procedure name add_new_user. This is very important. It tells us that the error occurred during compilation or execution of the stored procedure. Because the name is present, the error cannot come from the dynamic SQL. (Recall that the dynamic SQL is a nameless stored procedure of its own.) SSMS sees the procedure name and realises that there is little point in adjusting the line number from SQL Server, so the line number shown here really is line 12 in the stored procedure, not line 12 in the query window. (In this particular example, the procedure happens to be present in the query window, but many times you will call stored procedures not visible in your current window.)

Fix the typo and re-run the procedure. Now we get this output in SSMS 16 and later:

CREATE LOGIN [nisse]
                     WITH PASSWORD = 'TotaL1y#and0m' MUST_CHANGE,
                          CHECK_EXPIRATION
                 CREATE USER [nisse]
                 ALTER ROLE PlainUsers ADD MEMBER [nisse]

Msg 156, Level 15, State 1, Line 22
Incorrect syntax near the keyword 'CREATE'.

There isn’t a process title within the error message, so the error can’t come straight from inside add_new_user. Might it come from the dynamic SQL? Or might it come from one thing straight within the question window? We might attempt to double-click the error message and see the place we land. In the event you do this, you could not be capable to make a lot sense of it, however for those who strive it a number of occasions with the cursor positioned on completely different strains, you’ll finally realise that the cursor stays the place you had been. You might also get a message field from SSMS about one thing being out of bounds. No matter, for those who test extra carefully, you realise that the error message says line 22, however there are solely 19 strains within the script. And the dynamic SQL is simply 5 strains of code, so the place did that 22 come from?

Above, SSMS was useful and modified the road quantity from SQL Server to a line quantity within the question window, and that is what is going on right here as effectively. However what is the level with modifying the road quantity from the dynamic SQL? Clearly, we wish to see the precise line quantity as reported by SQL Server. Attempting to map that to a line quantity within the question window appears simply foolish. The reply is that there is no such thing as a approach for SSMS to discern whether or not the error message comes from the SQL code straight current within the question window, or some batch of dynamic SQL executed from that window. The error info offered by SQL Server has no such particulars, so SSMS simply assumes that the error comes from one thing straight current within the window.

This leads to a quite awkward situation. In this example it is maybe not that difficult, since the dynamic SQL is only five lines, but say that you generate a larger batch of dynamic SQL consisting of fifty to a hundred lines of code. In that case, you really want to know the line number within that batch, so that you can find where the error actually is.

The procedure is as follows. You first find the start of the batch you actually executed, which in this case is this line:

EXEC add_new_user 'nisse', 'TotaL1y#and0m', 1

The status bar informs us that this is line 19. Had the error been on the first line of the dynamic SQL, SQL Server would have reported Line 1 and SSMS would have said 19. Now it said 22, and 22-19+1 = 4. Thus, according to SQL Server, the error is on line 4. The actual error is that the correct syntax for the CHECK_EXPIRATION clause is CHECK_EXPIRATION = ON, but SQL Server does not detect the error until it sees the CREATE on the next line. If you add the missing syntax and try again, you will find that the batch executes successfully. (If you have SQL 2012 or later, as noted above.)
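If you want to see this offset arithmetic in isolation, here is a minimal sketch of my own (not part of the original script). The deliberate syntax error sits on line 3 of the dynamic batch, so SQL Server reports the error for line 3, and SSMS then adds the offset of where the outer batch starts in your query window, just as described above.

DECLARE @sql nvarchar(MAX) = N'SELECT 1
SELECT 2
SELECT FROM'       -- deliberate syntax error on line 3 of the dynamic batch
EXEC sp_executesql @sql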

Here I had a stored procedure, and therefore it was simple to tell whether the error came from the code that generated the dynamic SQL or from the dynamic SQL itself. But say that we want to make this a plain script, because we want to apply it to different servers or databases on different occasions. Paste the script below into the same query window as the procedure above, adding a go separator after the call to add_new_user, and then run what you pasted:

DECLARE @name     sysname = 'linda',
        @password nvarchar(50) = 'UlTr@-TOp_$ecre1',
        @debug    bit = 1

DECLARE @sql nvarchar(MAX)
SELECT @sql = 'CREATE LOGIN ' + quotename(@name) + ' 
                  WITH PASSWORD = '''  quotename(@password, '''') + ' MUST_CHANGE,
                       CHECK_EXPIRATION = ON
               CREATE USER ' + quotename(@name) + '
               ALTER ROLE PlainUsers ADD MEMBER ' + quotename(@name)
IF @debug = 1
   PRINT @sql
EXEC sp_executesql @sql

This is the output:

Msg 102, Level 15, State 1, Line 27

Incorrect syntax near 'quotename'.

Is this message coming from the dynamic SQL or from the script itself? In this example there is no procedure name to guide us. But there are some clues. One clue is the appearance of quotename in the error message; this suggests an error in the script itself, as quotename does not appear to be used inside the dynamic SQL. Another clue is that the contents of @sql is not printed, which it would be if execution had reached the line that invokes the dynamic SQL. We can get yet another clue by double-clicking the error message to see if we hit something that matches it. This time, we end up on this line:

                  WITH PASSWORD = '''  quotename(@password, '''') + ' MUST_CHANGE

If we look closer, we realise that there should be a + between the sequence of single quotes and quotename. Insert the plus character and run again:

CREATE LOGIN [linda]

                  WITH PASSWORD = ''UlTr@-TOp_$ecre1' MUST_CHANGE,

                       CHECK_EXPIRATION = ON

               CREATE USER [linda]

               ALTER ROLE PlainUsers ADD MEMBER [linda]

Msg 102, Level 15, State 1, Line 22

Incorrect syntax near 'UlTr@'.

Msg 105, Level 15, State 1, Line 22

Unclosed quotation mark after the character string ' MUST_CHANGE,

                       CHECK_EXPIRATION = ON

               CREATE USER [linda]

               ALTER ROLE PlainUsers ADD MEMBER [linda]'.

Since we see the SQL printed, that is a strong indication that the error is in the dynamic SQL. And since the text UlTr@ appears in the message, the text alone helps us to locate where the error is. But say that the situation is not that fortunate, and we need to translate the line number reported in SSMS to a line number in the dynamic SQL. Place the cursor on the first line in the batch, which in this example is the line with the first DECLARE. That is line 21, and thus the line number that SQL Server reports is 22-21+1 = 2. If your batch has a couple of blank lines before the DECLARE, you should put the cursor on the first of those blank lines – provided you included those blank lines when you selected the query text to execute. That is, the base number is the first line of what you selected in the query window.

Note that this highlights the importance of having those debug PRINTs. If you do not see the debug output, you can suspect that the error is in the script itself, and conversely, if you see the debug output, the error is likely to be in the dynamic SQL. (Although we saw an exception to that rule with the misspelling of sp_executesql above.)

If you have a script that produces multiple batches of dynamic SQL, things can get really tricky. If you get an error message after the printout of a batch of dynamic SQL, the error may be in that batch – or it may come from the code that generates the next batch. You will have to use your common sense to find out what is going on. If you double-click the error message and the line you end up on has no relation to the error message, that is an indication that the error is in the dynamic SQL.

Spacing and Formatting

As I said earlier, an important virtue when you work with dynamic SQL is discipline, and this really comes into the picture when it comes to spacing. You need to take care so that different parts of your dynamic SQL do not run into each other. Here is an example where sloppiness is rewarded with an error message:

DECLARE @sql nvarchar(MAX)
DECLARE @tblname sysname = 'dbo.Customers'
SELECT @sql = 'SELECT CompanyName, ContactName, Address
               FROM' + @tblname +
              'WHERE City = @city'
PRINT @sql
EXEC sp_executesql @sql, N'@city nvarchar(15)', 'London'

The output is:


SELECT CompanyName, ContactName, Address

               FROMdbo.CustomersWHERE City = @city

Msg 102, Level 15, State 1, Line 36

Incorrect syntax near '.'.

The FROM and the WHERE have been glued to the table name because of the missing spaces in the SQL string. (As noted above, the line number depends on exactly where in the window you pasted the command. I ran the above in a window where I already had some code, and SSMS reported the error to be on line 36.)

You should make it a habit to make sure that there is always white space before and after keywords when you split up the string to bring in the contents of a variable or whatever. White space includes newlines, and since T‑SQL permits string literals to run over line boundaries, you should make use of this. Here is a better version of the above:

DECLARE @sql nvarchar(MAX)
DECLARE @tblname sysname = 'dbo.Customers'
SELECT @sql = 'SELECT CompanyName, ContactName, Address
               FROM ' + @tblname + '
               WHERE City = @city'
PRINT @sql
EXEC sp_executesql @sql, N'@city nvarchar(15)', 'London'

The SQL batch now runs successfully. This is the SQL string that is printed:

SELECT CompanyName, ContactName, Address

               FROM dbo.Customers

               WHERE City = @city

The indentation is a little funny, but the code is perfectly readable.

If you are coming from a language like Visual Basic which does not permit string literals to run over line breaks, you may prefer something like this:

DECLARE @sql nvarchar(MAX)
DECLARE @tblname sysname = 'dbo.Customers'
SELECT @sql = 'SELECT CompanyName, ContactName, Address ' +
              ' FROM ' + @tblname + 
              ' WHERE City = @city '
PRINT @sql
EXEC sp_executesql @sql, N'@city nvarchar(15)', 'London'

While this code as such certainly is fairly easy to read, I still strongly recommend against it, for more than one reason. One is that the dynamic SQL will be one single long line, so if there is an error message, it will always be reported to be on line 1 (after you have applied the transformations in the previous section). If the error message is cryptic and the SQL text is long, you may be quite in the dark about where the error really is. A more general reason is that in my experience, the boundary between all those plusses and quotes is a danger area where errors easily creep in because we accidentally omit one or the other. So the fewer we have of them, the better.

As I said at the beginning of this chapter: when you are about to build something with dynamic SQL, first write a static query where you have sample values for whatever needs to be dynamic at run-time. This serves two purposes:

  • You make sure that you have the syntax of the query as such correct.
  • You get the formatting you want for the query.

I like to stress that the formatting of the dynamic SQL is important for two reasons:

  1. If there is a compilation error in the code, the line number helps you to find where the error is. (Never mind that you have to go through the hoops I discussed in the previous section.)
  2. The code executes, but does not produce the expected results. It really helps if the query is formatted in a way you are used to when you are trying to understand what is wrong with it.

Sometimes you use multiple statements to build your dynamic SQL, because you are picking up different pieces from different places. A prime example is when you write something to support dynamic search conditions. A typical passage in such a procedure could be:

IF @orderid IS NOT NULL
   SET @sql += ' AND O.OrderID = @orderid'

(Note the space before AND!) Now, if you go on and add a number of these SQL fragments, you will get them all on a single line, which may not be that readable. At the same time, putting the closing single quote on the next line would look funny. The way I usually deal with this is to introduce a variable that I typically call @nl (for newline):

DECLARE @nl nchar(2) = char(13) + char(10)

And then I write the above as

IF @orderid IS NOT NULL
   SET @sql += ' AND O.OrderID = @orderid' + @nl

This gives me line breaks in my dynamic SQL. This little tip may not register with you immediately, but I have found this approach very helpful for writing code that generates dynamic SQL in such a way that both the code itself and the generated code are clearly readable.
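Here is a small self-contained sketch of the pattern; the table and column names are assumed from the NorthDynamic demo database and are not taken from any particular procedure in this article:

DECLARE @sql     nvarchar(MAX) = 'SELECT O.OrderID, O.CustomerID
                                  FROM   dbo.Orders O
                                  WHERE  1 = 1',
        @nl      nchar(2) = char(13) + char(10),
        @orderid int      = 11000,
        @custid  nchar(5) = N'ALFKI'

IF @orderid IS NOT NULL
   SET @sql += ' AND O.OrderID = @orderid' + @nl
IF @custid IS NOT NULL
   SET @sql += ' AND O.CustomerID = @custid' + @nl

-- Thanks to @nl, the added fragments do not all pile up on one single line.
PRINT @sql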

Dealing with Nested Strings

One of the most charming (irony intended) things about working with dynamic SQL is nested string literals. To include a single quote in a string in T‑SQL you need to double it. You have already learnt that for values that come from variables, where the syntax does not permit you to use the variables directly, you should use quotename or quotestring to get the value into the SQL string and protect yourself against SQL injection. This has the side effect that the number of nested quotes is reduced.
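As a quick refresher, this is what quotename does with a single quote as the second argument (a mini-example of my own, not taken from the procedures below):

SELECT quotename(N'O''Brien', '''')   -- 'O''Brien' – wrapped in quotes, inner quote doubled
SELECT quotename(N'Customers')        -- [Customers] – the default delimiter is brackets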

To make it clear what I am talking about, this is not the right way to do it:

CREATE OR ALTER PROCEDURE create_db @dbname   sysname,
                                    @sizeinmb smallint = 1200,
                                    @logsize  smallint = 200, 
                                    @debug    bit = 0 AS

   DECLARE @datadir nvarchar(128) =
                  convert(nvarchar(128), serverproperty('InstanceDefaultDataPath')),
           @logdir nvarchar(128) = 
                  convert(nvarchar(128), serverproperty('InstanceDefaultLogPath')),
           @sql   nvarchar(MAX)
   SELECT @sql = 
     N'CREATE DATABASE ' + quotename(@dbname) + '
          ON PRIMARY (NAME = N''' + @dbname + ''',
             FILENAME = N''' + @datadir + @dbname + '.mdf'',
             SIZE = ' + convert(varchar, @sizeinmb) + ' MB)
          LOG ON (NAME = N''' +  @dbname + '_log'', 
             FILENAME = N''' + @logdir + @dbname + '.ldf'', 
             SIZE = ' + convert(varchar, @logsize) + ' MB)'
   IF @debug = 1 PRINT @sql
   EXEC(@sql)

As a reminder: the above will never run correctly on SQL 2008 or earlier, as the two serverproperty values used here were added in SQL 2012.

This code will fail if there is a single quote in the database name or in any of the installation paths. And it is open to SQL injection, which would matter if create_db were exposed to plain users so that they can create a database in a controlled way. (The procedure would somehow have to run with higher permissions than the users have themselves.) Add to this that it is not trivial to write. It took me a few attempts to get all the quotes right, not least all those Ns. As a repetition, here is the proper way to do it:

CREATE OR ALTER PROCEDURE create_db @dbname   sysname,
                                    @sizeinmb smallint = 1200,
                                    @logsize  smallint = 200, 
                                    @debug    bit = 0 AS

   DECLARE @datadir nvarchar(128) =
                  convert(nvarchar(128), serverproperty('InstanceDefaultDataPath')),
           @logdir nvarchar(128) = 
                  convert(nvarchar(128), serverproperty('InstanceDefaultLogPath')),
           @sql   nvarchar(MAX)
   SELECT @sql = 
     N'CREATE DATABASE ' + quotename(@dbname) + '
          ON PRIMARY (NAME = N' + quotename(@dbname, '''') + ',
             FILENAME = N' + dbo.quotestring(@datadir + @dbname + '.mdf') + ',
             SIZE = ' + convert(varchar, @sizeinmb) + ' MB)
          LOG ON (NAME = N' +  quotename(@dbname + '_log', '''') + ', 
             FILENAME = N' + dbo.quotestring(@logdir + @dbname + '.ldf') + ', 
             SIZE = ' + convert(varchar, @logsize) + ' MB)'
   IF @debug = 1 PRINT @sql
   EXEC(@sql)

Here I have used quotename for the internal names of the files, as they can be at most 128 characters, but quotestring for the file paths. It would not be wrong to use quotestring for all four, though.

In terms of readability, it may be a matter of taste which one you prefer, but I submit that the second is easier to write. And moreover, it is safe against SQL injection.

We will look at some more advanced techniques for dealing with nested strings in the sections More on Dynamic Names and Nested Strings and Playing with QUOTED_IDENTIFIER.

Note: A better version of create_db would sanitise @dbname before constructing the file paths, so that it does not include any characters with special meaning to the file system, for instance slashes tilting forwards or backwards. I have left that out since it strays beyond the scope of this article.

A Trap with Long SQL Strings

Here is something you can run into if you concatenate SQL strings that are quite long. Have a look at the script in the file longquery.sql. You find there a query which is seemingly nonsensical. To get a demo for this section, I took a long query from the MSDN forums, but to protect the poster, I replaced the original column names with product names from NorthDynamic. The original query did not use dynamic SQL, but I introduced a dynamic database name for the sake of the example in this section.

Note: to get the output described in this section, you need SQL 2012 or later. On SQL 2008, you will get additional errors due to the use of IIF.

The expected behaviour when running the script is that we get an error message telling us that View_XVUnitsProducts is an invalid object, since there is no such view in tempdb. However, when we run the script, we get a different error:

Msg 102, Level 15, State 1, Line 49

Incorrect syntax near 'View_XVUnits'.

We can also see in the result set that @sql is exactly 4000 characters long, and if we look at the contents of @sql, we can see that the string is truncated. We are using nvarchar(MAX) for our @sql variable, and yet the query is truncated?

The underlying reason is found in the SQL Server type system. If you have an operation with two operands of the same data type, the result will be of the same data type as the operands. In this context, nvarchar(20) and nvarchar(40) count as the same data type, whereas nvarchar(MAX) is a different data type. In the script, the @sql variable is built up from a number of nvarchar literals interleaved with some calls to quotename. All the string literals are less than 4000 characters in length. For this reason, the type of the expression is nvarchar(4000). The fact that there is an nvarchar(MAX) variable on the left side of the assignment operator does not change this. Thus, the net result is that the string expression is silently truncated.
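Here is a minimal repro of the truncation, independent of the script. replicate of a short literal returns a non-MAX nvarchar, so the concatenation is typed nvarchar(4000) before it is assigned to the MAX variable:

DECLARE @sql nvarchar(MAX)
SELECT @sql = replicate(N'x', 3000) + replicate(N'y', 3000)
SELECT len(@sql)      -- 4000, not 6000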

Note: for some extra spice, try replacing tempdb in the script with NorthDynamic. The way the truncation strikes this time, the error message is definitely not easy to understand. At least not as long as you don't look at the debug output.

At first, it may seem that to get this working we have to wrap every string in convert or cast to change the data type to nvarchar(MAX), which would make the code harder to read. Luckily, it is not that bad. SQL Server has a rule which says that when two different types meet in the same expression, the type with lower precedence in the SQL Server type system is converted to the type with higher precedence (if an implicit conversion exists, that is). For this reason, it is sufficient to introduce an initial nvarchar(MAX) value at the start of the concatenation, and this causes the conversion to snowball through the entire concatenation. Here is a possible solution (I am only showing the beginning of the script):

DECLARE @sql nvarchar(MAX),
        @dbname sysname = 'tempdb'

SELECT @sql = cast('' AS nvarchar(MAX)) + N'

The error message is now the expected one:

Msg 208, Level 16, State 1, Line 2

Invalid object name 'tempdb.dbo.View_XVUnitsProducts'.

We can also see in the result set that the string is 4423 characters long. That is, by adding an empty string cast to nvarchar(MAX), we got the query working with minimal littering.

The @sql variable itself can serve for the task, and in many cases this comes naturally, because you add to the @sql string piece by piece. In this example it would look like this:

DECLARE @sql nvarchar(MAX) = '',
        @dbname sysname = 'tempdb'

SELECT @sql = @sql + N'

But there are two new traps hiding here. The first is one you would discover quickly: if you fail to initialise @sql to the empty string, the entire string will be NULL, and nothing will be executed. The other trap is that you (or someone else) may be tempted to introduce the shortcut operator +=:

DECLARE @sql nvarchar(MAX) = '',
        @dbname sysname = 'NorthDynamic'

SELECT @sql += N'

This brings back the syntax error, because on the right-hand side of the += operator there are now only short strings, and thus there is no implicit conversion to nvarchar(MAX). For this reason, adding

cast('' AS nvarchar(MAX)) + N'

is a good safety precaution.

Receiving Names of Tables and Other Objects as Parameters

There are situations where you want to receive the name of a table or some other object as a parameter. Having table names as parameters to a procedure in an application is not the best of ideas, something I will discuss further in the last chapter of this article. But there are certainly situations where this can be desirable for a DBA procedure. Consider this procedure, which may not be the best example of a practical use case, but it serves as a short and concise example:

USE NorthDynamic
go
CREATE OR ALTER PROCEDURE GetCount @tblname nvarchar(1024) AS
   DECLARE  @sql nvarchar(MAX)
   SELECT @sql = 'SELECT COUNT(*) FROM ' + @tblname
   PRINT @sql
   EXEC sp_executesql @sql
go
EXEC GetCount 'dbo.Customers'

The example runs, but as you can see, the procedure is wide open to SQL injection. Nor is it able to handle names with special characters in them. We have learnt that we should use quotename to handle this. But if we try:

CREATE OR ALTER PROCEDURE GetCount @tblname nvarchar(1024) AS
   DECLARE  @sql nvarchar(MAX)
   SELECT @sql = 'SELECT COUNT(*) FROM ' + quotename(@tblname)
   PRINT @sql
   EXEC sp_executesql @sql
go
EXEC GetCount 'dbo.Customers'

We get an error message:

SELECT COUNT(*) FROM [dbo.Customers]

Msg 208, Level 16, State 1, Line 7

Invalid object name 'dbo.Customers'.

Some readers may be puzzled by this, because at first glance it looks correct. Isn't there a table with this name in NorthDynamic? Nah, there is a table Customers in the dbo schema, and with brackets correctly applied, that is written as [dbo].[Customers]. But when there is only one pair of brackets, SQL Server takes dbo.Customers to be a table name in one-part notation, and looks for it in the default schema of the current user. That is, the dot is part of the name; it is not a separator when it is inside the brackets.
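You can see the difference for yourself with two one-liners (my own illustration):

SELECT quotename('dbo.Customers')                       -- [dbo.Customers] – a one-part name with a dot in it
SELECT quotename('dbo') + '.' + quotename('Customers')  -- [dbo].[Customers] – a proper two-part name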

Here is another example where it goes wrong:


EXEC GetCount '[Order Details]'

The output:

SELECT COUNT(*) FROM [[Order Details]]]

Msg 208, Level 16, State 1, Line 10

Invalid object name '[Order Details]'.

This time it failed because the caller had already wrapped the name in brackets.

The way to resolve this is to first use the parsename function to extract any database part in @tblname. parsename is a function that returns the various parts of an object specification, with the parts numbered from right to left. That is, 1 is the object name itself, 2 is the schema, 3 is the database name and 4 is the name of any linked server. If a part is missing from the specification, NULL is returned. Then use the object_id function to get the object id for the table/view, and the function db_id to get the id of the database in @tblname, or the id of the current database if no database component is present. Finally, you use the metadata function db_name to translate the database id back to a name, together with the functions object_schema_name and object_name to get the names of the schema and the object itself from the object and database ids. You wrap everything in quotename. Here is an example, with comments added to explain the various steps.

CREATE OR ALTER PROCEDURE GetCount @tblname nvarchar(1024) AS
   DECLARE  @sql       nvarchar(MAX),
            @object_id int, 
            @db_id     int

   -- Supporting names on linked servers would require a whole lot more work.
   IF parsename(@tblname, 4) IS NOT NULL
   BEGIN
      RAISERROR('Tables on a different server are not supported.', 16, 1)
      RETURN 1
   END

   -- Get object id and validate.
   SELECT @object_id = object_id(@tblname)
   IF @object_id IS NULL
   BEGIN
      RAISERROR('No such table/view "%s".', 16, 1, @tblname)
      RETURN 1
   END
   
   -- Get the database id for the object.
   IF parsename(@tblname, 3) IS NOT NULL
      SELECT @db_id = db_id(parsename(@tblname, 3))
   ELSE 
      SELECT @db_id = db_id()   

   -- Get the normalised name.
   SELECT @tblname = quotename(db_name(@db_id)) + '.' + 
                     quotename(object_schema_name(@object_id, @db_id)) + '.' + 
                     quotename(object_name(@object_id, @db_id))

   SELECT @sql = 'SELECT COUNT(*) FROM ' + @tblname
   PRINT @sql
   EXEC sp_executesql @sql

As the comment says, the procedure does not support objects on linked servers, as that would be a whole lot more complicated, and I leave this as an exercise to the reader who needs it. The procedure does support objects in other databases, but in many cases you can permit yourself to constrain a lookup like this to the current database. In that case, you should add parsename(@tblname, 3) IS NOT NULL to the initial check and change the error message accordingly, as sketched below.
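Here is a sketch of what that modified initial check could look like, under the assumption that you want to reject everything that is not in the current database:

   -- Reject linked servers as well as other databases.
   IF parsename(@tblname, 4) IS NOT NULL OR parsename(@tblname, 3) IS NOT NULL
   BEGIN
      RAISERROR('Only tables/views in the current database are supported.', 16, 1)
      RETURN 1
   END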

Here are some test cases:

CREATE DATABASE [extra-database]
go
USE [extra-database]
go
CREATE SCHEMA extra;
go
CREATE TABLE extra.testtable(a int NOT NULL)
INSERT extra.testtable(a) SELECT TOP 65 object_id FROM sys.objects
go
USE NorthDynamic
go
EXEC GetCount 'dbo.Customers'
EXEC GetCount '[Order Details]'
EXEC GetCount 'sys.schemas'
EXEC GetCount 'msdb.dbo.sysjobs'
EXEC GetCount '[extra-database].extra.testtable'
EXEC GetCount 'OTHERSERVER.master.sys.databases'
go
DROP DATABASE [extra-database]

When running the tests, we get this output in the Messages tab (with the "rows affected" messages omitted):

SELECT COUNT(*) FROM [NorthDynamic].[dbo].[Customers]

SELECT COUNT(*) FROM [NorthDynamic].[dbo].[Order Details]

SELECT COUNT(*) FROM [NorthDynamic].[sys].[schemas]

SELECT COUNT(*) FROM [msdb].[dbo].[sysjobs]

SELECT COUNT(*) FROM [extra-database].[extra].[testtable]

Msg 50000, Level 16, State 1, Procedure GetCount, Line 9 [Batch Start Line 50]

Tables on a different server are not supported.

John Smart read this and liked the idea. He decided to package the above in a user-defined function, SanitizeObjectName, which he was kind enough to share. This permits you to run

SELECT dbo.SanitizeObjectName('Order Details')

and get back [NorthDynamic].[dbo].[Order Details]. His script also includes the function SanitizeObjectNameEx, which has a few extra bells and whistles. It permits you to request an error if the object cannot be found, and you can specify a different default schema than dbo.

You can apply the same pattern to other types of objects. For things like stored procedures and other objects that live in sys.objects, the example works as given (save for the SELECT COUNT(*), obviously). For other classes of objects there are in many cases metadata functions you can use in the same manner; that is, you translate to an id and then back to a name that you wrap in quotename. Alas, not all metadata functions understand quoted identifiers, as testified by this example:

SELECT user_id('dbo'), user_id('[dbo]')   -- Returns 1, NULL

Luckily, these are functions for classes of objects that do not live in a schema and which cannot be addressed across databases either, so the names always consist of a single component. This permits a simpler solution: check if the name is already wrapped in brackets, and in that case remove them before applying quotename. Here is an example with some test cases:

CREATE USER NormalUser WITHOUT LOGIN
CREATE USER AlsoNormal WITHOUT LOGIN
CREATE USER [Space user] WITHOUT LOGIN
go
CREATE OR ALTER PROCEDURE MyDropUser @username sysname AS
   DECLARE @sql nvarchar(MAX)

   SELECT @sql = 'DROP USER ' +
      quotename (CASE WHEN left(@username, 1) = '[' AND
                           right(@username, 1) = ']'
                      THEN substring(@username, 2, len(@username) - 2)
                      ELSE @username
                 END)
   PRINT @sql
   EXEC(@sql)
go
EXEC MyDropUser 'NormalUser'
EXEC MyDropUser '[AlsoNormal]'
EXEC MyDropUser 'Space user'

This method does not handle absolutely everything. For instance, if a user name actually begins and ends with brackets, things are likely to go wrong. But it may be good enough for you.

Some readers may think that if the name is already wrapped in brackets, we could take the string as-is without wrapping it in quotename, but that would be wrong. All say after me please: SQL injection! To see why, consider the call below.
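Here is a made-up call against a hypothetical version of MyDropUser that would pass a bracket-wrapped value through unchanged (the table name is of course fictitious):

-- The value both starts with '[' and ends with ']', so a naive check would accept it as-is,
-- and the generated statement would then become:
--    DROP USER [x]; DROP TABLE dbo.VeryImportantTable; --]
-- that is, the caller has smuggled in an extra statement.
EXEC MyDropUser '[x]; DROP TABLE dbo.VeryImportantTable; --]'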

Dynamic Database and Server Names

It is not uncommon for the database name to be dynamic. It may be that you want to do something in several databases, or that you have different databases for different years, customers or whatever. A variation on this theme is that in the same stored procedure you want to run commands both in the local database and in the master database. In this latter case there is not really anything dynamic about the database name, but you need to switch databases.

Many people do as in this example:

DECLARE @dbname sysname = 'NorthDynamic',
        @sql    nvarchar(MAX)
SELECT @sql = 'USE ' + quotename(@dbname) + ' 
               SELECT * FROM dbo.Customers WHERE CustomerID = @custid'
PRINT @sql
EXEC sp_executesql @sql, N'@custid nchar(5)', N'ALFKI'

That works, but there is a different approach that I find cleaner:

DECLARE @dbname        sysname = 'NorthDynamic',
        @sql           nvarchar(MAX),
        @sp_executesql nvarchar(200)
SELECT @sp_executesql = quotename(@dbname) + '.sys.sp_executesql'
SELECT @sql = N'SELECT * FROM dbo.Customers WHERE CustomerID = @custid'
PRINT @sql
EXEC @sp_executesql @sql, N'@custid nchar(5)', N'ALFKI'

I am making use of two things here:

  1. EXEC accepts a variable for the procedure name.
  2. A system procedure executes in the context of the database it was invoked from, also when it is invoked in three-part notation.

You could argue that an advantage of the USE statement is that the debug print makes it clear in which database the SQL batch is executed. That is certainly true, but you could address this by adding a comment to the SQL string, as in the sketch below.
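For instance, a variation of the example above could embed the database name in a leading comment (a sketch; the comment text is my own):

DECLARE @dbname        sysname = 'NorthDynamic',
        @sql           nvarchar(MAX),
        @sp_executesql nvarchar(200)
SELECT @sp_executesql = quotename(@dbname) + '.sys.sp_executesql'
SELECT @sql = N'-- Executed in database ' + quotename(@dbname) + char(13) + char(10) +
              N'SELECT * FROM dbo.Customers WHERE CustomerID = @custid'
PRINT @sql
EXEC @sp_executesql @sql, N'@custid nchar(5)', N'ALFKI'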

We learnt in the section on EXEC() AT that we can use sp_executesql to run commands on a linked server that is another SQL Server instance. If you want the server name to be dynamic, you can use the same technique:

DECLARE @servername    sysname = 'YOURSERVER',
        @dbname        sysname = 'NorthDynamic',
        @sql           nvarchar(MAX),
        @sp_executesql nvarchar(200)
SELECT @sp_executesql = quotename(@servername) + '.' + 
                        quotename(@dbname) + '.sys.sp_executesql'
SELECT @sql = N'SELECT * FROM dbo.Customers WHERE CustomerID = @custid'
PRINT @sql
EXEC @sp_executesql @sql, N'@custid nchar(5)', N'ALFKI'

Since USE is not easy to apply in this case, this solution comes out as really powerful here.

I should add that this technique is good when you want to run DDL commands or an occasional query in another database or server, but if you have plenty of cross-database or cross-server queries in your application where you do not want to hardcode the database/server names, you should use synonyms instead (as sketched below), which relieves you from using dynamic SQL at all. I will discuss this further in the section Setting up Synonyms for Cross-Database Access.
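To give a flavour of what that looks like, here is a small sketch with made-up names; once the synonym is in place, the query itself is plain static SQL:

-- Repoint the application by dropping and recreating the synonym,
-- rather than by editing every query.
CREATE SYNONYM dbo.RemoteCustomers FOR OtherDatabase.dbo.Customers
go
SELECT COUNT(*) FROM dbo.RemoteCustomers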

Doing Something in All Databases

There are situations when you want to perform an action in all databases (or a selection of databases), and that action may include an operation on a set of tables, views or whatever in those databases. There are two system procedures that are popular for this purpose, sp_MSforeachdb and sp_MSforeachtable. They are undocumented and unsupported, but you can easily find examples on the web.

Personally, I am not fond of these procedures, since the resulting code tends to be less than readable. If I want to do something in every database, I prefer to write my own cursor loop – that is after all not exactly rocket science. And if I want to perform the same action on a number of objects, I rather write a query against the system catalogue to generate the statements. If it is just a one-off, I simply copy the result into a query window and execute it. But if I need it in a script, I concatenate the rows into a single string and execute that string.

I will give you an example that you can use as a boilerplate for your own needs, and then I briefly touch on another example at the end of this section. The task is that we need to grant SELECT permission on all views to a certain role in all databases where this role exists. Because we are only granting SELECT on a certain object type, we cannot grant permission on schema or database level – we do not want to give the users access to the base tables – so we have to grant access per object. (Putting all views in a separate schema, so that we could grant access on the schema? That is a great idea! But assume for the example that this is not feasible. Maybe we simply did not think of it when we made our original design and now we are stuck with it.) Here is the script:

DECLARE @db            sysname,
        @sp_executesql nvarchar(500),
        @query         nvarchar(MAX),
        @dbstmts       nvarchar(MAX),
        @role          sysname = 'ourrole'

SELECT @query =
    'IF user_id(@role) IS NOT NULL
     BEGIN
        SELECT @dbstmts =
           (SELECT  ''GRANT SELECT ON '' + 
                    quotename(s.name) + ''.'' + quotename(v.name) + 
                    '' TO '' + quotename(@role) + char(13) + char(10)
            FROM    sys.views v
            JOIN    sys.schemas s ON s.schema_id = v.schema_id
            WHERE   NOT EXISTS (SELECT * 
                                FROM   sys.database_permissions dp
                                WHERE  dp.major_id = v.object_id
                                  AND  dp.type = ''SL''
                                  AND  dp.grantee_principal_id = user_id(@role))
            FOR XML PATH(''''), TYPE).value(''.'', ''nvarchar(MAX)'')
     END
     ELSE 
        SELECT @dbstmts = NULL'

DECLARE dbcur CURSOR STATIC LOCAL FOR
   SELECT quotename(name) FROM sys.databases
   WHERE  database_id > 4
     AND  state = 0         -- Only online databases
     AND  is_read_only = 0
   ORDER BY name

OPEN dbcur

WHILE 1 = 1
BEGIN
   FETCH dbcur INTO @db
   IF @@fetch_status <> 0
      BREAK

   SELECT @sp_executesql = @db + '.sys.sp_executesql'

   EXEC @sp_executesql @query, N'@dbstmts nvarchar(MAX) OUTPUT, @role sysname', 
                       @dbstmts OUTPUT, @role
   PRINT @db
   IF @dbstmts IS NOT NULL
   BEGIN
      PRINT @dbstmts
      EXEC @sp_executesql @dbstmts
   END
END

DEALLOCATE dbcur

The script starts with a number of variables. @db is the database we are currently working with in the loop, and @sp_executesql will hold the three-part name for sp_executesql in @db, as discussed in the previous section. @query is the query we run in @db to generate @dbstmts, which performs the actual work. These four variables are part of the boilerplate, but the last one, @role, is specific to this example: this is the role we will grant rights to. (I figured it was better to put the role name in a variable, in case we want to perform this action for more than one role.)

We first set up @query. It starts by checking whether @role is present in the database (it does not actually verify that it is a role; any database principal will be accepted). If @role is present, @query runs the main query against sys.views to get the views, filtering out views that @role already has access to. The SELECT list builds a GRANT statement for each view; each statement is followed by CR+LF to make the output easy to read, in case we want to inspect it.

You may not have seen this FOR XML PATH thing before: it is a way to convert a result set into a concatenated string. It works, but the syntax is not the prettiest. If you have no need to support SQL 2016 and earlier, you can instead use string_agg, which is a lot cleaner. There is a version with string_agg below.

The next step is to set up a cursor over sys.databases so that we can loop over them. The cursor is STATIC LOCAL. It is well beyond the scope of this article, but I strongly recommend that you always set up your cursors this way, since otherwise there can be nasty surprises. (The above-mentioned sp_MSforeachdb uses DYNAMIC GLOBAL, and there are reports of sp_MSforeachdb skipping databases.) Many times you will want to modify the filtering so that you only include the databases of interest to you. In this example, I exclude the regular system databases master, tempdb, model and msdb, but I make no effort to filter out databases for SSRS, replication etc. For this particular example, I trust that the check on @role inside @query will filter out such databases anyway. But if you were doing something like creating a DDL trigger in all user databases, you really need to be careful so that you do not create them in system-installed databases.

I have two more filters: one to exclude databases that are not online, and one to exclude read-only databases. I would suggest that these are filters you will include as a rule. Depending on your needs you may want to add more conditions to exclude or include databases.

Inside the cursor loop it is fairly straightforward. We first execute @query to get the statements to execute in the current database in the loop. In many cases when you use this technique, @query will only have one parameter, that is, the output parameter @dbstmts. However, in this particular example we also pass @role as a parameter. It is a matter of taste, but I have opted to always print the database name, even when there is nothing to execute for a particular database. Finally in the loop, we print @dbstmts and execute the SQL returned from @query.

Here is an alternate statement to build @query that uses the string_agg function, which is prettier than the FOR XML PATH hocus-pocus, but as noted it requires SQL 2017 or later:

SELECT @query =
    'IF user_id(@role) IS NOT NULL
     BEGIN
        SELECT @dbstmts =  string_agg(convert(nvarchar(MAX), 
                              ''GRANT SELECT ON '') + 
                                 quotename(s.name) + ''.'' + quotename(v.name) + 
                              '' TO '' + quotename(@role),  
                              nchar(13) + nchar(10))
            FROM    sys.views v
            JOIN    sys.schemas s ON s.schema_id = v.schema_id
            WHERE   NOT EXISTS (SELECT * 
                                FROM   sys.database_permissions dp
                                WHERE  dp.major_id = v.object_id
                                  AND  dp.type = ''SL''
                                  AND  dp.grantee_principal_id = user_id(@role))
     END
     ELSE 
        SELECT @dbstmts = NULL'

The convert to nvarchar(MAX) is needed to avoid truncation if the result is longer than 4000 characters.

You will see more concatenation with FOR XML PATH and string_agg in later sections, not least in the section on dynamic pivot.

Some time back, there was a question in a forum where the poster wanted to find a certain string on her server, where she had many databases. So that was the task: in which tables and in which databases can you find a certain string? I figured I should help her, so I set off using the boilerplate above. And, believe me, it was hard to get it right when there are multiple layers of dynamic SQL. I am not going to take up space with the script here, but you will find it in searchalldatabases.sql.

Cursors and Dynamic SQL

Cursors are something you should use sparingly, and the situations where you need to use dynamic SQL and a cursor in combination are few and far between. Nevertheless, I have a tip to share, so here it goes.

I always tell people to set up their cursors this way:

DECLARE cur CURSOR STATIC LOCAL FOR
   SELECT ...

STATIC ensures that you do not get the default cursor type, which is DYNAMIC and which is prone to really bad performance. LOCAL means that the scope of the cursor is local to the stored procedure, so when the procedure exits, the cursor goes away. The default is that a cursor is global to your connection. This can result in nasty errors if the cursor loop is interrupted half-way and you re-enter the procedure on the same connection, since the cursor still exists at that point.

However, LOCAL does not work if you for whatever reason want to set up the cursor with dynamic SQL. If you try this:

DECLARE @name sysname,
        @sql  nvarchar(MAX)

SELECT @sql = 'DECLARE cur CURSOR STATIC LOCAL FOR
       SELECT TOP (5) CompanyName FROM dbo.Customers
       ORDER  BY CustomerID'

EXEC sp_executesql @sql

OPEN cur

You get this error message for the OPEN statement:

Msg 16916, Level 16, State 1, Line 9

A cursor with the name 'cur' does not exist.

You should know why by now: the code inside @sql is a stored procedure of its own, and LOCAL means local to that scope.

But there is a way out that I originally learnt from Anthony Faull. You can use a cursor variable:

DECLARE @name sysname,
        @sql  nvarchar(MAX),
        @cur  CURSOR

SELECT @sql = 'SET @cur = CURSOR STATIC FOR
       SELECT TOP (5) CompanyName FROM dbo.Customers
       ORDER  BY CustomerID 
   OPEN  @cur'

EXEC sp_executesql @sql, N'@cur CURSOR OUTPUT', @cur OUTPUT

WHILE 1 = 1
BEGIN 
   FETCH @cur INTO @name
   IF @@fetch_status <> 0
      BREAK
   
   PRINT @name
END

Cursor variables have been in the product for a long time (since SQL 7), but they have remained one of those obscure features that few people know about and even fewer use. (Until Anthony sent me his example, I had never considered them myself.)

Pay attention to the fact that the cursor is set up with SET rather than DECLARE. I still specify STATIC, but LOCAL is not needed (nor is it permitted), since the scope of the cursor is the same as for the variable that holds the cursor. Note that the OPEN statement has to be inside the dynamic SQL. If you put it after the call to the dynamic SQL, you get this error:

Msg 16950, Level 16, State 2, Line 13

The variable '@cur' does not currently have a cursor allocated to it.

 

This is one of the very few cases where there is a difference between the nameless stored procedures created with sp_executesql and "real" stored procedures. With the latter, you need to insert the keyword VARYING between CURSOR and OUTPUT in the parameter declaration. But VARYING is not permitted with sp_executesql, and neither is it needed.
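For comparison, here is a minimal sketch (my own, not taken from the demo files) of how such a parameter would be declared in a regular stored procedure:

CREATE OR ALTER PROCEDURE get_customer_cur @cur CURSOR VARYING OUTPUT AS
   SET @cur = CURSOR STATIC FOR
       SELECT TOP (5) CompanyName FROM dbo.Customers
       ORDER BY CustomerID
   OPEN @cur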

A Clean Way to Deal with Dynamic Names

We are starting to arrive at the more advanced sections of this chapter. Although the ideas in this section are something everyone can have use for at times, inexperienced readers may want to jump to the next chapter at this point.

Dynamic SQL is difficult to read and maintain because of the mix of the syntax in the hosting procedure and the dynamic SQL. The sorest points are where you break up the SQL string to splice in a variable for something that cannot be a parameter. For instance, we previously looked at the procedure create_db, which includes this statement:

   SELECT @sql = 
     N'CREATE DATABASE ' + quotename(@dbname) + '
          ON PRIMARY (NAME = N' + quotename(@dbname, '''') + ',
             FILENAME = N' + dbo.quotestring(@datadir + @dbname + '.mdf') + ',
             SIZE = ' + convert(varchar, @sizeinmb) + ' MB)
          LOG ON (NAME = N' +  quotename(@dbname + '_log', '''') + ', 
             FILENAME = N' + dbo.quotestring(@logdir + @dbname + '.ldf') + ', 
             SIZE = ' + convert(varchar, @logsize) + ' MB)'

But you can avoid all those single quotes, plusses and function calls. Here is a version of @sql which expresses what you want to do much more succinctly:

   SELECT @sql = 
     N'CREATE DATABASE @dbname
          ON PRIMARY (NAME = @datafilename, FILENAME = @datafilepath, 
                      SIZE = @sizeinmb MB)
          LOG ON (NAME = @logfilename, FILENAME = @logfilepath, 
                  SIZE = @logsize MB)'

No, this is not legal syntax, and these things starting with @ are not really variables; rather, they are placeholders which you replace with the actual values with the help of the replace function. Here is the full code of the new version of create_db together with a test case:

CREATE OR ALTER PROCEDURE create_db @dbname   sysname,
                                    @sizeinmb smallint = 1200,
                                    @logsize  smallint = 200, 
                                    @debug    bit = 0 AS

   DECLARE @datadir nvarchar(128) =
                  convert(nvarchar(128), serverproperty('InstanceDefaultDataPath')),
           @logdir nvarchar(128) = 
                  convert(nvarchar(128), serverproperty('InstanceDefaultLogPath')),
           @sql   nvarchar(MAX)

   SELECT @sql = 
     N'CREATE DATABASE @dbname
          ON PRIMARY (NAME = @datafilename, FILENAME = @datafilepath, 
                      SIZE = @sizeinmb MB)
          LOG ON (NAME = @logfilename, FILENAME = @logfilepath, 
                  SIZE = @logsize MB)'

   SELECT @sql = replace(@sql, '@dbname', quotename(@dbname))
   SELECT @sql = replace(@sql, '@datafilename', 
                               'N' + quotename(@dbname, ''''))
   SELECT @sql = replace(@sql, '@datafilepath', 
                               dbo.quotestring_n(@datadir + @dbname + '.mdf'))
   SELECT @sql = replace(@sql, '@sizeinmb', convert(varchar, @sizeinmb))
   SELECT @sql = replace(@sql, '@logfilename', 
                               'N' + quotename(@dbname + '_log', ''''))
   SELECT @sql = replace(@sql, '@logfilepath', 
                               dbo.quotestring_n(@logdir + @dbname + '.ldf'))
   SELECT @sql = replace(@sql, '@logsize', convert(varchar, @logsize))

   IF @debug = 1 PRINT @sql
   EXEC(@sql)
go
EXEC create_db [Test'Dβ], @debug = 1
go
DROP DATABASE [Test'Dβ]

Each replace also makes sure that the value is correctly quoted. You may note that @sql has different variables/placeholders for the database name and the name of the primary data file, although the value is the same. This is required, since the value is to be quoted differently in the two places. You may also note that in this example I use quotestring_n for the file paths, since they could possibly be more than 128 characters long. Note also that there is some importance to having the N before the string literals everywhere, or else the Greek letter β will be replaced by its look-alike, the German "scharfes s", ß.

This example may not be entirely persuasive. Although the @sql string as such is more concise and easier to understand, the total length of create_db more than doubled, so I would expect many readers to prefer the earlier version. But imagine a longer SQL string that runs over twenty lines or more, with only a couple of distinct "variables" that are repeated several times in the string. Now this technique becomes more attractive. My thinking is that this is a trick to have in your toolbox for when you feel that the string concatenation is threatening your mental sanity, not something you would use every time you work with dynamic names. Use what you think works best for the situation.

We will explore more possibilities in the next section.

More on Dynamic Names and Nested Strings

In this section we will take a look at one more way to deal with dynamic names, and we will also look at some more techniques to deal with nested strings. What I present in this section is more directed at advanced users with a good understanding of what they are doing, and less experienced readers may find the material a little too bewildering.

If we look at the solution for create_db in the previous section, it is a little annoying that we have so many variables/placeholders that are derived from the input parameter @dbname. So rather than replacing the full values and adding string delimiters and all when we run replace, we could have placeholders inside the string literals, something like this:

   SELECT @sql = 
     N'CREATE DATABASE @dbname
          ON PRIMARY (NAME = N''%(dbname)'', 
                      FILENAME = N''%(datadir)%(dbname).mdf'',
                      SIZE = %(sizeinmb) MB)
          LOG ON (NAME = N''%(dbname)_log'', 
                  FILENAME = N''%(logdir)%(dbname).ldf'',
                  SIZE = %(logsize) MB)'

If you are saying that you like the previous solution better, I am not going to argue with you, but pretend for a moment that you actually like this one and hang on, because you may appreciate some of the devices I introduce to deal with nested strings. As for the style of the %(placeholder), this is just something I picked as an alternative to @placeholder; use whatever you prefer.

When we replace the placeholders, we must consider the risk that the values include single quotes, so it needs to be something like this:

SELECT @sql = replace(@sql, '%(dbname)', replace(@dbname, '''', ''''''''))

Ouch! That was not pretty. Luckily, we do not have to write code like that; instead we can introduce two local variables to preserve our sanity:

DECLARE @sq    nchar(1) = '''',
        @sqsq  nchar(2) = '''' + ''''
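With these two variables in place, the replace above can be written like this instead (same logic, just easier on the eyes):

SELECT @sql = replace(@sql, '%(dbname)', replace(@dbname, @sq, @sqsq))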

Another thing which is a bit of a strain on the eye is all the doubled single quotes in the SQL string. This example may be somewhat tolerable, but if your dynamic SQL has a lot of constant string literals, you can really long for an alternative. What do you think of this:

   SELECT @sql = 
     N'CREATE DATABASE @dbname
          ON PRIMARY (NAME = N"%(dbname)", 
                      FILENAME = N"%(datadir)%(dbname).mdf",
                      SIZE = %(sizeinmb) MB)
          LOG ON (NAME = N"%(dbname)_log", 
                  FILENAME = N"%(logdir)%(dbname).ldf",
                  SIZE = %(logsize) MB)'
   SELECT @sql = replace(@sql, '"', @sq)

That is, here I use double quotes, and then I replace the double quotes with single quotes once I have built my dynamic SQL.

While this may be easier on the eyes, it is a technique you should use with some care. You cannot do this if you somewhere concatenate values into the string rather than using placeholders. Consider, for the sake of the example, this mix:

SELECT @sql = 
  N'CREATE DATABASE @dbname
       ON PRIMARY (NAME = N"%(dbname)", 
                   FILENAME = N' + dbo.quotestring(@datadir + @dbname + '.mdf') + ',
                   SIZE = %(sizeinmb) MB)
       LOG ON (NAME = N"%(dbname)_log", 
               FILENAME = N"%(logdir)%(dbname).ldf",
               SIZE = %(logsize) MB)'
  SELECT @sql = replace(@sql, '"', @sq)

If @datadir or @dbname were to include a double quote, you could get unexpected results, including syntax errors. And your code would be open to SQL injection.

From this follows that if you use the double quote as an alternate string delimiter, you must also use variables/placeholders that you replace for all input values. And, also very important: you must replace the double quotes with single quotes before you tackle any of the placeholders.

It goes without saying that if the SQL string includes double quotes naturally (e.g., XML or full-text predicates), you would need to find a different delimiter to be able to use this technique.


Adding all these pieces together, here is a new version of create_db:

CREATE OR ALTER PROCEDURE create_db @dbname   sysname,
                                    @sizeinmb smallint = 1200,
                                    @logsize  smallint = 200, 
                                    @debug    bit = 0 AS

   DECLARE @datadir nvarchar(128) =
                  convert(nvarchar(128), serverproperty('InstanceDefaultDataPath')),
           @logdir nvarchar(128) = 
                  convert(nvarchar(128), serverproperty('InstanceDefaultLogPath')),
           @sql   nvarchar(MAX),
           @sq    nchar(1) = '''',
           @sqsq  nchar(2) = '''' + ''''

   SELECT @sql = 
     N'CREATE DATABASE @dbname
          ON PRIMARY (NAME = N"%(dbname)", 
                      FILENAME = N"%(datadir)%(dbname).mdf",
                      SIZE = %(sizeinmb) MB)
          LOG ON (NAME = N"%(dbname)_log", 
                  FILENAME = N"%(logdir)%(dbname).ldf",
                  SIZE = %(logsize) MB)'

   SELECT @sql = replace(@sql, '"', @sq)
   SELECT @sql = replace(@sql, '@dbname',     quotename(@dbname))
   SELECT @sql = replace(@sql, '%(dbname)',   replace(@dbname, @sq, @sqsq))
   SELECT @sql = replace(@sql, '%(datadir)',  replace(@datadir, @sq, @sqsq))
   SELECT @sql = replace(@sql, '%(logdir)',   replace(@logdir, @sq, @sqsq))
   SELECT @sql = replace(@sql, '%(sizeinmb)', convert(varchar, @sizeinmb))
   SELECT @sql = replace(@sql, '%(logsize)',  convert(varchar, @logsize))

   IF @debug = 1 PRINT @sql
   EXEC(@sql)
go
EXEC create_db [Test'Dβ], @debug = 1
go
DROP DATABASE [Test'Dβ]

Playing with QUOTED_IDENTIFIER

There is one more approach to nested strings that I mention with some hesitation, since it relies on deprecated functionality. Below is an excerpt from a script that collects a lot of data for all user databases on a server. (This is a script that I send to customers who want me to review the performance of their application.)

CREATE TABLE #sql (sql nvarchar(MAX) NOT NULL)
go
SET QUOTED_IDENTIFIER OFF
go
INSERT #sql (sql)
VALUES ("
INSERT tempdb..procs
SELECT db_name(), o.type, s.name, o.name, o.object_id
FROM   sys.objects o
JOIN   sys.schemas s ON o.schema_id = s.schema_id
WHERE  o.type IN ('P', 'PC', 'FN', 'FS')
...

INSERT tempdb..qs_runtime_stats_interval 
    SELECT db_name() AS dbname, * FROM sys.query_store_runtime_stats_interval
")
go
SET QUOTED_IDENTIFIER ON
go
DECLARE @sql nvarchar(MAX),
        @sp_executesql nvarchar(200)
SELECT @sql = sql FROM #sql

DECLARE cur CURSOR STATIC LOCAL FOR
   SELECT quotename(name) + '.sys.sp_executesql'
   FROM   sys.databases
   WHERE  database_id >= 5
     AND  state = 0
     AND  is_read_only = 0

OPEN cur

WHILE 1 = 1
BEGIN
   FETCH cur INTO @sp_executesql
   IF @@fetch_status <> 0
      BREAK

   EXEC @sp_executesql @sql
END

I make use of the fact that with SET QUOTED_IDENTIFIER OFF, the double quote serves as a string delimiter on equal footing with the single quote. That permits me to delimit the SQL string in double quotes, so there is no need to double all the single quotes. Which means that I can simply copy the SQL code to a query window in SSMS and test it. (There is no dynamic part in the SQL code here; the only thing that calls for dynamic SQL is that I want to run it in multiple databases.)

You should be aware that SET QUOTED_IDENTIFIER OFF is a legacy option, and there are several features in SQL Server that require this setting to be ON, for instance XML type methods and filtered indexes, to name two. To avoid any such accidents, I have isolated this setting to a single batch where I insert the SQL string into a temp table. And the motive for the temp table is precisely to permit me to have this isolated batch; otherwise I could have assigned the text to @sql directly.

It goes with out saying that this isn’t going to work out effectively, in case your SQL batch contains double quotes already, for example since you are coping with XML or JSON.

Further Examples

On my web site there are a couple of examples of what I think is well-crafted dynamic SQL. These examples are too long to include here, not least because it would take up too much space to explain the purpose of the code. So I will let it suffice to give the links, so that you can explore this code on your own if you want inspiration for how to write good dynamic SQL.

Certificate signing

I have already referred to my article Packaging Permissions in Stored Procedures a few times in this article. That article includes a stored procedure GrantPermsToSP and a script GrantPermsToSP_server.sql, which both automate the process of creating a certificate, signing a procedure, creating a user/login from that certificate and granting that user the permissions needed, thereby packaging the permissions into the procedure. The server-level script also copies the certificate from the user database to master. I think both of them are good examples of dynamic SQL, and you will recognise several of the techniques I have presented in this chapter.

The stored procedure is definitely easier to digest, since it operates within a single database. The server-side script is more complex. It also includes one thing I do not discuss elsewhere in this article: how to perform an action on all servers in an availability group.

beta_lockinfo

beta_lockinfo is a monitoring procedure that returns information about the current activity in the server: you can see the locks, who is blocking whom, the current statement of each process with the query plan and a lot more. There is no end of dynamic SQL in this procedure. The longest piece is the part where I translate the various ids in sys.dm_tran_locks to names, which can be table names, index names etc. Here I make use of several of the techniques I have discussed earlier in this style guide. There is also some more esoteric stuff. I wrote the first version of this procedure for SQL 6.5, and in those days there were no grids in the query tool; all output was text-based. To optimise the screen real estate, I added code to make the width of the columns dynamic (a technique I stole from sp_who2). It is still there if you go with the @textmode parameter.

For a maximum dosage of dynamic SQL, I think the version for SQL 2012-2014 is the best one. (Because that version includes a part that only runs on SQL 2014, which I handled with dynamic SQL rather than creating a new version of the procedure.)

Some Good Use Cases for Dynamic SQL

In this chapter we will look at some examples where dynamic SQL is useful. Some of the sections in this chapter are just a quick discussion, maybe with pointers elsewhere, while others include actual solutions for how to do things. If you feel that your favourite use case is missing, drop me a line – but first read the next chapter, where I discuss situations where dynamic SQL is less likely to be the right solution in my opinion. I mean, just in case your suggestion would be covered there. 🙂

Dynamic Search Conditions

A requirement that comes up in more than one application is that users want to be able to search the data using a lot of different search conditions. For instance, in an order system, users may want to find orders by order id, customer name, date interval, product id etc, and often in combination. For a data set of any size, it is impossible to get good performance unless you observe that you need different query plans for different search conditions. This can be achieved with a well-written query in static SQL, provided that you add the query hint OPTION (RECOMPILE) at the end of the query. However, these solutions tend to break down when the conditions are more complex in nature. Also, if searches are very frequent, the recompilation can become a performance issue in itself. The alternative is to build the search query with dynamic SQL, where it is easier to add more complex conditions, and since each combination of search conditions gets its own cached plan, this reduces the amount of compilation considerably.

I have a separate article on my web site entitled Dynamic Search Conditions where I discuss how to implement searches with static SQL + OPTION (RECOMPILE) as well as how to implement such searches with dynamic SQL, and thus I will not cover this topic further in this article.
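To give a flavour of the static variant referred to above, here is a minimal sketch against the Orders and Customers tables in NorthDynamic. The parameter set is my own invention for the sake of the example; the article mentioned above covers the pattern, and its dynamic-SQL counterpart, in depth.

CREATE OR ALTER PROCEDURE search_orders_sketch @orderid  int      = NULL,
                                                @custid   nchar(5) = NULL,
                                                @fromdate date     = NULL,
                                                @todate   date     = NULL AS
SELECT O.OrderID, O.OrderDate, O.CustomerID, C.CompanyName
FROM   dbo.Orders O
JOIN   dbo.Customers C ON C.CustomerID = O.CustomerID
WHERE  (O.OrderID    = @orderid  OR @orderid  IS NULL)
  AND  (O.CustomerID = @custid   OR @custid   IS NULL)
  AND  (O.OrderDate >= @fromdate OR @fromdate IS NULL)
  AND  (O.OrderDate <= @todate   OR @todate   IS NULL)
OPTION (RECOMPILE)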

Accessing Remote Data Sources

We have already looked a little at using dynamic SQL with linked servers, and particularly at the case where you want the name of the linked server to be set at run-time. In this section, we will work with cases where the name of the linked server is static. Furthermore, we will only look at queries that as such could be written in entirely static SQL, but where we may have to resort to dynamic SQL in order to get acceptable performance.

The simplest way to get data from a remote server is to refer to tables by four-part notation, that is, server.database.schemaname.tablename. Sometimes this works just fine, but there are also situations when the optimizer decides, for one reason or another, to pull a big remote table in its entirety over to the local server to apply a WHERE condition which filters out all but a handful of rows.
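As a reminder, a four-part query could look like this small sketch, where SERVER is a placeholder for the name of the linked server:

SELECT C.CustomerID, C.CompanyName
FROM   SERVER.NorthDynamic.dbo.Customers C
WHERE  C.Country = N'Brazil'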

When this happens, you start to look at alternatives. Closest at hand is OPENQUERY, which permits you to run a pass-through query on the remote data source, and thus ensures that the filtering will be performed remotely. This often sends you to dynamic-SQL land, because OPENQUERY requires a fixed SQL string as input. You cannot pass the query text in a variable, nor can the query be parameterised; all parameter values must be inlined into the query. (Exactly what I have been nagging throughout this article that you should never do!) This can lead to some pretty ugly encounters with dynamic SQL.

There is a good reason why OPENQUERY does not accept a variable for the query: SQL Server needs to know the shape and format of the result set coming back from the query at compile time. And since the plan is cached, the result set must be known and fixed. Thus, the query cannot be a variable which produces results with different shapes on different executions. As for why OPENQUERY does not accept parameters, I am less sure, but it may be that they have simply not come around to implementing it.

The best solution is to avoid OPENQUERY altogether, and instead run a dynamic parameterised query and receive the result into a temp table, which you can use in the rest of the query. If the remote server is another SQL Server instance, this is a pattern we have seen a few times already.

INSERT #temp (....)
   EXEC SERVER.db.sys.sp_executesql @query, @params, @par1, @par2, ...

This will execute on the remote server in the context of the remote database, so in @query you can refer to tables in two-part notation; there is no need to specify the database name in the query. If the linked server is something other than SQL Server, you cannot use sp_executesql, but earlier in the article we looked at EXEC() AT, which also supports parameters.
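For the non-SQL Server case, a sketch of the EXEC() AT pattern could look like this; LINKEDSRV is a placeholder name, and the question mark is the parameter marker that EXEC() AT uses. You can combine it with INSERT into a temp table just like above.

DECLARE @country nvarchar(15) = N'Brazil'
EXEC('SELECT CustomerID, CompanyName
      FROM   dbo.Customers
      WHERE  Country = ?', @country) AT LINKEDSRV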

There are two reasons why this approach is to prefer over OPENQUERY:

  • The dynamic SQL is less ugly – typically @query is a fixed query string.
  • You avoid littering the plan cache on the remote server by sending different query strings for different parameter values. (This applies if the remote server is another SQL Server instance. It may or may not apply to other remote data sources as well.)

That said, there may be situations where this solution is not to your liking and you find that OPENQUERY works best for you, so let's look at how to do it in a proper way. Before we go on, we need a linked server to have something to play with. We set up a loopback server in the section on EXEC() AT, but in case you do not have it around anymore, here are the commands again:

EXEC sp_addlinkedserver LOOPBACK, '', 'SQLNCLI', @datasrc = @@servername
EXEC sp_serveroption LOOPBACK, 'rpc out', 'true'

As I said before, a loopback server like this permits us to run the test scripts from a single query window. If you prefer, you can use a second instance as your linked server. Just make sure that you have NorthDynamic on that instance.

If you start using OPENQUERY without too much thought behind it, you may end up writing something like this:

CREATE OR ALTER PROCEDURE get_remote_data @searchstr nvarchar(40) AS
DECLARE @sql nvarchar(MAX)
SELECT @sql="SELECT * FROM OPENQUERY(LOOPBACK, " +
              '''SELECT * FROM NorthDynamic.dbo.Prospects ' +
              'WHERE CompanyName LIKE N''''%' + @searchstr + '%'''''')'
PRINT @sql
EXEC sp_executesql @sql
go
EXEC get_remote_data 'Futter'

I can only recommend you to take this path if you have already made a conscious decision to lose your mind. Do you really fancy the idea of staring at something like this for a whole day, trying to understand whether the reason it does not work is that you have one single quote too many or too few somewhere? Or whether those per-cent characters should be elsewhere? The above does work – but I can assure you that I did not get it right on my first attempt. And while it may be working, it is open for SQL injection – both on the remote server and the local server.

When you work with OPENQUERY, you need to take a systematic approach when you build your dynamic query and do it in two steps. First build the remote query, and then embed it into the local query, as in this example:

CREATE OR ALTER PROCEDURE get_remote_data @searchstr nvarchar(40) AS
DECLARE @remotesql nvarchar(MAX),
        @localsql  nvarchar(MAX)

SELECT @remotesql="SELECT * FROM NorthDynamic.dbo.Prospects
      WHERE CompanyName LIKE " + dbo.quotestring_n('%' + @searchstr + '%')
SELECT @localsql="SELECT * FROM OPENQUERY(LOOPBACK, " + 
                    dbo.quotestring_n(@remotesql) + ')'
PRINT @localsql
EXEC sp_executesql @localsql
go
EXEC get_remote_data 'snabb'

Note that we must use quotestring_n here for @remotesql, since most likely it will exceed 128 characters, so quotename is not a viable option. We could use quotename for @searchstr, since it is at most 40 characters, but it seems logical to use the same function for both. I like to point out that using quotestring or quotestring_n is not only a matter of being pretty. The string literal passed to OPENQUERY must be nvarchar, or else @searchstr could be used for SQL injection on the remote server, similar to the trick Joe Cool played in the section The Importance of nvarchar.

We will now move on to another situation where performance can be an issue.

SELECT ...
FROM   server.db.dbo.bigremotetbl r
JOIN   dbo.tinylocaltbl l ON r.somecol = l.somecol

As the names suggest, bigremotetbl is a huge table, with millions of rows or more, while tinylocaltbl has just a handful. For good performance you want the optimizer to settle for a plan where the contents of tinylocaltbl.somecol is shipped over to the remote server, and the part of the query against bigremotetbl is evaluated there. The optimizer is certainly able to produce such a plan, but for one reason or another it may get the idea of bringing over all rows in bigremotetbl to the local server. I have to admit that I am not inclined to take my chances that it will get it right. When I encounter a situation like this, I rather pass the data in the local table to the remote server and run a query there as a disaster prevention.

My preferred method is to package the data in the local table in an XML string which I pass to the remote server with sp_executesql. In the remote query, I shred the XML data into a local temp table which I then join to the main table on the remote server. Here is an example where the temp table #getthese corresponds to tinylocaltbl, while Customers in NorthDynamic serves as a token bigremotetbl.

CREATE TABLE #getthese (custid nchar(5) NOT NULL PRIMARY KEY)
INSERT #getthese(custid)
   VALUES(N'ALFKI'), (N'BERGS'), (N'VINET')

go
DECLARE @xmlstr nvarchar(MAX),
        @query   nvarchar(MAX)
SELECT @xmlstr = (SELECT custid FROM #getthese FOR XML RAW, ROOT('root'))

SELECT @query = '
   DECLARE @xml xml = cast(@xmlstr AS xml)

   CREATE TABLE #xmldata (custid nchar(5) NOT NULL PRIMARY KEY)

   INSERT #xmldata(custid)
      SELECT T.c.value(''@custid'', ''nchar(5)'')
      FROM   @xml.nodes(''/root/row'') AS T(c)

   SELECT C.CustomerID, C.CompanyName, C.ContactName, C.City
   FROM   dbo.Customers C
   WHERE  EXISTS (SELECT * FROM #xmldata x WHERE C.CustomerID = x.custid)'

EXEC LOOPBACK.NorthDynamic.sys.sp_executesql @query, N'@xmlstr nvarchar(MAX)', @xmlstr

When forming the XML string, I use FOR XML RAW, which is the simplest to use for this purpose. You may note that the data type of @xmlstr is nvarchar and not xml. This is because, as we have noted a few times already, the xml data type is not supported in calls to linked servers. Instead, I cast @xmlstr to xml on the remote side, so that I can shred it into #xmldata.

You may ask whether the temp table on the remote server is required; couldn't we use @xml.nodes directly? Indeed we could, but then the optimizer would not have any clue about the data and would make a blind assumption, which could lead to poor performance. For this reason, it is generally a good idea to shred XML data into a temp table, so that the optimizer gets some statistics to work from.

The reader may find this overly complicated and ask if we couldn't use a table-valued parameter instead. The answer is the same as for the xml data type: not supported with linked servers. A viable alternative on SQL 2016 and later is to use JSON instead, if you so fancy. Since JSON is handled as nvarchar in SQL Server anyway, this removes the initial cast in @query above.
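Here is a sketch of what the JSON variant could look like, assuming the same #getthese table as above and that the remote database has a compatibility level of at least 130 so that OPENJSON is available; @jsonstr and #jsondata are names of my own choosing.

DECLARE @jsonstr nvarchar(MAX),
        @query   nvarchar(MAX)
SELECT @jsonstr = (SELECT custid FROM #getthese FOR JSON PATH)

SELECT @query = '
   CREATE TABLE #jsondata (custid nchar(5) NOT NULL PRIMARY KEY)

   INSERT #jsondata(custid)
      SELECT custid FROM OPENJSON(@jsonstr) WITH (custid nchar(5))

   SELECT C.CustomerID, C.CompanyName, C.ContactName, C.City
   FROM   dbo.Customers C
   WHERE  EXISTS (SELECT * FROM #jsondata j WHERE C.CustomerID = j.custid)'

EXEC LOOPBACK.NorthDynamic.sys.sp_executesql @query, N'@jsonstr nvarchar(MAX)', @jsonstr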

An alternative is to use OPENQUERY and transform the local temp table to an IN list:

DECLARE @getlist      nvarchar(MAX),
        @remotequery  nvarchar(MAX),
        @localquery   nvarchar(MAX)

SELECT @getlist = 
   (SELECT dbo.quotestring_n(custid) + ', '
    FROM   #getthese
    FOR XML PATH(''), TYPE).value('.', 'nvarchar(MAX)')
SELECT @getlist = substring(@getlist, 1, len(@getlist) - 1)  

-- SELECT @getlist = string_agg(dbo.quotestring_n(custid), ',') FROM #getthese

SELECT @remotequery = 
    N'SELECT C.CustomerID, C.CompanyName, C.ContactName, C.City
      FROM   NorthDynamic.dbo.Customers C
      WHERE  C.CustomerID IN (' + @getlist + ')'

SELECT @localquery = 
   N'SELECT CustomerID, CompanyName, ContactName, City
     FROM   OPENQUERY(LOOPBACK, ' + dbo.quotestring_n(@remotequery) + ')'

PRINT @localquery
EXEC sp_executesql @localquery

The comma-separated list is stored in @getlist, which we assemble with help of FOR XML PATH. No, we are not really using XML here; this is just that funny way to build a string from a result set which we have seen a few times already in this article. On SQL 2017 and later, we can use the more straightforward string_agg instead, here commented out. Once we have @getlist, we build the query in two steps as we did above: we first build @remotequery, and then we embed it into @localquery.

I am less fond of this solution. It causes litter in the plan cache on the remote server, since the values in #getthese become constants in the remote query. Furthermore, the IN operator is only useful when there is a single key column. With EXISTS you can also handle multi-key conditions. Then again, the solution with OPENQUERY should work with about any remote data source, whereas the solution with sending over XML needs to be reworked if your linked server is something other than SQL Server.

Setting Up Synonyms for Cross-Database Access

You may have a situation where two databases on a server need to reference each other for one reason or another. If you hardcode references with three-part notation like otherdb.dbo.tbl, you get a problem if you want to set up a second environment for test or whatever on the same instance. If all cross-references are by procedure calls, you can build procedure names dynamically:

SELECT @otherdb = otherdb FROM configtable
SELECT @spname = @otherdb + '.dbo.some_sp'
EXEC @spname

But that is less palatable if you also want to make references to views or tables in the other database, since you would need to use dynamic SQL to get the database name into the query. Fortunately, there is no need to do this, since you can use synonyms. For instance:

CREATE SYNONYM otherdb.thattable FOR MyOtherDB.dbo.thattable

You can now query otherdb.thattable as if it were a local table, but the query will actually access thattable in MyOtherDB. In this example I have placed the synonym in a schema which is specific to references to the other database. There is no requirement to do this, so this works too:

CREATE SYNONYM dbo.thattable FOR MyOtherDB.dbo.thattable

However, having a separate schema helps you to see what the name refers to. The schema also makes it easier to manage the synonyms with the procedures I present in this section. To wit, if you have a lot of objects you need to make references to, you need a lot of synonyms, and maintaining these manually is not really practical. This is a place where dynamic SQL comes in handy, and I will present two stored procedures, setup_synonyms and retarget_synonyms.

setup_synonyms sets up synonyms for all tables, views and stored procedures in @otherdb and puts the synonyms in the schema @synschema. The list of object types is hardcoded, but you can modify the list to fit your needs. All existing synonyms in @synschema are dropped before the new ones are created, so you can use setup_synonyms for an initial setup as well as for a refresh when new objects have been added to the other database. You can also use it to re-target the synonyms to refer to a different database, for instance when you have restored a set of databases with new names on the same instance. The statements to drop and create synonyms are inside a transaction, so that if something goes wrong, everything is rolled back. Feel free to adapt the procedure to fit your needs.

CREATE OR ALTER PROCEDURE setup_synonyms @synschema sysname, @otherdb sysname AS
BEGIN TRY
   DECLARE @sp_executesql nvarchar(600),
           @dropsql       nvarchar(MAX),
           @createquery   nvarchar(MAX),
           @createsql     nvarchar(MAX)

   -- Validate input parameters.
   IF schema_id(@synschema) IS NULL
      RAISERROR('Schema "%s" does not exist.', 16, 1, @synschema)
   
   IF db_id(@otherdb) IS NULL
      RAISERROR('Database "%s" does not exist.', 16, 1, @otherdb)

   -- Set up reference for sp_executesql.
   SELECT @sp_executesql = quotename(@otherdb) + '.sys.sp_executesql'

   -- Run a query to generate the SQL to drop existing synonyms.
   SELECT @dropsql = 
      (SELECT 'DROP SYNONYM ' + quotename(@synschema) + '.' + quotename(name) +
              char(13) + char(10)
       FROM   sys.synonyms
       WHERE  schema_id = schema_id(@synschema)
       FOR XML PATH(''), TYPE).value('.', 'nvarchar(MAX)')

   -- Set up a query to create the new synonyms. This is dynamic SQL, since it
   -- runs in the other database, of which the name is a parameter.
   SELECT @createquery = 'SELECT @sql = 
                       (SELECT '' CREATE SYNONYM '' + 
                               quotename(@schema) + ''.'' + quotename(o.name) +  
                               '' FOR '' + quotename(db_name()) + ''.'' + 
                               quotename(s.name) + ''.'' + quotename(o.name) + 
                               char(13) + char(10)
                       FROM  sys.objects o
                       JOIN  sys.schemas s ON o.schema_id = s.schema_id
                       WHERE o.type IN (''P'', ''U'', ''V'')
                       FOR XML PATH(''''), TYPE).value(''.'', ''nvarchar(MAX)'')'

    -- Run the query to get the statements.
    EXEC @sp_executesql @createquery, N'@schema sysname, @sql nvarchar(MAX) OUTPUT',
                                      @synschema, @createsql OUTPUT

    -- Print and run the statements within a transaction.
    BEGIN TRANSACTION
    PRINT @dropsql
    EXEC (@dropsql)
    PRINT @createsql
    EXEC (@createsql)
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    IF @@trancount > 0 ROLLBACK TRANSACTION
    ; THROW
END CATCH

Note: the CATCH handler uses the ;THROW statement that was introduced in SQL 2012. For a quick test, you can replace ;THROW with a RAISERROR statement if you are on SQL 2008 or earlier. For more information about error handling, see Part One of my series Error and Transaction Handling in SQL Server.

There may be situations where you do not want to create synonyms for every object in the other database, for instance because developers are only supposed to use officially defined interface objects. In that case, you would define your synonyms in a script and not use setup_synonyms. However, you may still have the need to redefine all synonyms to refer to a different database, and to this end you would use retarget_synonyms. It updates all existing synonyms in @synschema by replacing the existing database name with @otherdb. As with setup_synonyms, all work is done inside a transaction.

CREATE OR ALTER PROCEDURE retarget_synonyms @synschema sysname, @otherdb sysname AS
BEGIN TRY
   DECLARE @sql  nvarchar(MAX)

   -- Validate input parameters.
   IF schema_id(@synschema) IS NULL
      RAISERROR('Schema "%s" does not exist.', 16, 1, @synschema)
   
   IF db_id(@otherdb) IS NULL
      RAISERROR('Database "%s" does not exist.', 16, 1, @otherdb)

   -- Run a query to generate the SQL to drop and re-create the existing synonyms.
   SELECT @sql = 
      (SELECT 'DROP SYNONYM ' + quotename(@synschema) + '.' + quotename(name) +
              char(13) + char(10) +
              'CREATE SYNONYM ' + quotename(@synschema) + '.' + quotename(name) +
                 ' FOR ' + quotename(@otherdb) + '.' + 
                           quotename(parsename(base_object_name, 2)) + '.' + 
                           quotename(parsename(base_object_name, 1)) +
              char(13) + char(10)
       FROM   sys.synonyms
       WHERE  schema_id = schema_id(@synschema)
       FOR XML PATH(''), TYPE).value('.', 'nvarchar(MAX)')

    -- Print and run the statements within a transaction.
    BEGIN TRANSACTION
    PRINT @sql
    EXEC (@sql)
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    IF @@trancount > 0 ROLLBACK TRANSACTION
    ; THROW
END CATCH

I leave these procedures uncommented from the perspective of dynamic SQL, as they build on concepts we have already discussed, but you are welcome to study the techniques that I employ. As in earlier examples where we have used FOR XML PATH to build a list of values, you can instead use string_agg on SQL 2017 or later to make the code simpler.

The other database may exist on another SQL Server instance. You can use synonyms in this case as well, as in this example:

CREATE SYNONYM otherdb.sometbl FOR SERVER.otherdb.dbo.sometbl

Modifying these procedures to support an optional @server parameter is fairly straightforward. You will need to change @createquery in setup_synonyms so that the @sql text is returned in a result set which you capture with INSERT-EXEC, since OUTPUT parameters of MAX types are not supported in calls to linked servers.
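A sketch of what that capture could look like, under the assumption that @createquery has been rewritten to end with SELECT @sql rather than assigning to an OUTPUT parameter; the table variable @stmts is a name of my own choosing.

DECLARE @stmts TABLE (sql nvarchar(MAX) NOT NULL)

INSERT @stmts (sql)
   EXEC @sp_executesql @createquery, N'@schema sysname', @synschema

SELECT @createsql = sql FROM @stmts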

BULK INSERT and OPENROWSET(BULK)

If you want to load data from a file with BULK INSERT or OPENROWSET(BULK), you often find yourself using dynamic SQL, because they require that the names of all files in the command (data file, format file, error file) are string literals, but very often the file name(s) are given at run-time. Since you are working with strings, the general caveats apply. That is, use one of quotename or quotestring. The latter is safer, since a file path certainly can be longer than 128 characters.

Here is a sample with BULK INSERT:

DECLARE @datafile nvarchar(250) = 'C:\temp\mydata.bcp',
        @formatfile nvarchar(250) = 'C:\temp\mydata.fmt',
        @sql nvarchar(MAX)
                                                            
SELECT @sql = 'BULK INSERT mytable FROM ' + dbo.quotestring_n(@datafile) + '
               WITH (FORMATFILE = ' + dbo.quotestring_n(@formatfile) + ')'
PRINT @sql
EXEC(@sql)

The generated command is:

BULK INSERT mytable FROM N'C:\temp\mydata.bcp'
               WITH (FORMATFILE = N'C:\temp\mydata.fmt')

I am not taking up space here with a sample table and sample files, so unless you create them yourself, the command will fail with an error that the table or file is missing when you run it.

Here is an example with OPENROWSET(BULK):

DECLARE @datafile nvarchar(250) = 'C:\temp\mydata.bcp',
        @formatfile nvarchar(250) = 'C:\temp\mydata.fmt',
        @sql nvarchar(MAX)

SELECT @sql = 'INSERT mytable (a, b, c)
      SELECT a, b, c FROM OPENROWSET(BULK ' + dbo.quotestring_n(@datafile) + ',
      FORMATFILE = ' + dbo.quotestring_n(@formatfile) + ') AS t'
PRINT @sql
EXEC(@sql)

One could argue that, from the perspective of dynamic SQL, it would be cleaner to have the INSERT outside of the dynamic SQL and use INSERT-EXEC. However, there are some optimisations when you say INSERT ... SELECT ... FROM OPENROWSET(BULK) which you lose if you put the INSERT outside the dynamic SQL. Likewise, there are some hints to the INSERT command that are only applicable with OPENROWSET(BULK). So for this reason, it is better to keep the INSERT inside the dynamic SQL.

Maintenance Tasks in General

There are a lot of DBA tasks that can be automated with the help of dynamic SQL. Here are a few examples:

  • Altering a lot of logins to enable or disable password policies.
  • Creating logins and users from a table you have imported from an Excel file.
  • Adding a DDL trigger to many databases to enforce some policy.
  • Adding standard auditing columns to tables that do not have them.
  • A script you run in maintenance windows to increase the size of data files that have less than 10 % free space; see the sketch right after this list.
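As an illustration of the last item, here is a sketch that generates ALTER DATABASE commands for data files in the current database with less than 10 % free space. The growth increment of 4096 MB is an arbitrary choice for the example, and string_agg requires SQL 2017 or later.

DECLARE @sql nvarchar(MAX),
        @nl  nchar(2) = char(13) + char(10)

SELECT @sql = string_agg(convert(nvarchar(MAX),
                 'ALTER DATABASE ' + quotename(db_name()) +
                 ' MODIFY FILE (NAME = ' + quotename(name, '''') +
                 ', SIZE = ' + convert(varchar(20), size/128 + 4096) + 'MB)'), @nl)
FROM   sys.database_files
WHERE  type_desc = 'ROWS'
  AND  size - fileproperty(name, 'SpaceUsed') < 0.10 * size

IF @sql IS NOT NULL
BEGIN
   PRINT @sql
   EXEC(@sql)
END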

And there is no end to it. As long as you play the DBA and you are not writing application code, dynamic SQL is almost always fair game. A few caveats:

  • Don't miss the section on BACKUP/RESTORE in the next chapter.
  • If you are considering automating index and statistics maintenance, first go to http://ola.hallengren.com and look at his solution, which is more or less the de-facto standard in the SQL Server world. Ola has already done the work for you.
  • And while dynamic SQL in T‑SQL can be a really good hammer, it is not always the right tool. Particularly, if you want to do something on many servers, you should really be using Powershell. And no matter whether it is a task for a single server or many, it is always a good idea to check whether the good people at dbatools.io have something for you.

Dynamic Pivot – Making Rows into Columns

The premise for this problem is that you have a dataset with two or more keys, for instance sales per product and employee. You want to present the data as a matrix with one column per employee. You do not want to hardcode the employees, but if a new employee joins the sales department, the output should automatically include that employee.

Let's get this clear from the start: this is a non-relational operation. A query in a relational database returns a table with a fixed number of columns, where each column represents a distinct attribute in the dataset being returned. A dynamic pivot is the antithesis of this. Or more to the point: it is a presentational device. And logically, the best place for such devices is in the presentation layer, and you should expect a good report writer to offer such functionality. To mention a few, I know that in SSRS the Tablix report can do it, and I get the impression that in Power BI you can use the Matrix visual. Excel has had pivoting support for a long time. If you are receiving the data in, say, a .NET program, you can easily pivot the data in C#.

I could have stopped here and put this section into the next chapter as an example of what you should not use dynamic SQL for. However, there can be situations where you prefer to implement the pivoting in T‑SQL. For a very obvious example: you are a DBA, and your presentation layer is SSMS. In this case there is no other option. And even if you have a client program, you may find it more convenient to do the pivoting in T‑SQL. Furthermore, I see questions almost every day in the SQL forums that I monitor that ask for a dynamic pivot. (Never mind that these questions often come from inexperienced users who in many cases would be better off exploring client-side options.) So for these reasons, I decided to write this section to show how to implement a dynamic pivot in a structured way.

Note: you may have been referred to this section directly in response to a forum question. If you are eager to find a solution to your problem at hand, you can go on reading. But be warned that I have written this section under the assumption that you have read at least chapters 2 and 3, so if you do not really grasp what I am talking about, go to the top of the article, and come back here when you have completed those chapters. If you want to play with the examples, you find information about the demo database NorthDynamic and applicable SQL versions in the short section Applicable SQL Versions and Demo Database in the beginning of the article.

As always when we work with dynamic SQL, we should first write a static solution, so that we know the syntax of what we are going to generate. Say that we want to see how different products sell in the five most important countries for NorthDynamic Traders: Brazil, France, Germany, UK and USA. Some readers may at this point expect to see a solution using the PIVOT operator. However, I find this operator to be of little value. In fact, of so little value that I have not even bothered to learn it. Instead, we will use something I call CASE filters inside an aggregate function. Here is a query on this theme:

SELECT P.ProductName, 
       SUM(CASE C.Country WHEN 'Brazil'  THEN OD.Quantity END) AS Brazil,
       SUM(CASE C.Country WHEN 'France'  THEN OD.Quantity END) AS France,
       SUM(CASE C.Country WHEN 'Germany' THEN OD.Quantity END) AS Germany,
       SUM(CASE C.Country WHEN 'UK'      THEN OD.Quantity END) AS UK,
       SUM(CASE C.Country WHEN 'USA'     THEN OD.Quantity END) AS USA
FROM   dbo.Orders O
JOIN   dbo.[Order Details] OD ON O.OrderID = OD.OrderID
JOIN   dbo.Customers C ON C.CustomerID = O.CustomerID
JOIN   dbo.Products P ON P.ProductID = OD.ProductID
WHERE  C.Country IN ('Brazil', 'France', 'Germany', 'USA', 'UK')
GROUP  BY P.ProductName
ORDER  BY P.ProductName

As you see, we have five calls to the SUM aggregate function, one for each country, and inside these we have a CASE expression with a single WHEN clause and no ELSE. The part about no ELSE is important. For SUM it would work to add ELSE 0, but that would not work with other aggregate functions, because it would yield incorrect results. It is these single-alternative CASE expressions that I refer to as CASE filters.

Note: if you did not immediately get what I meant by incorrect results, but want to know: replace SUM with AVG and run the query with and without ELSE 0 to see the difference. With ELSE 0 in the query, all orders count, not only those from Brazil, France etc, so the averages are way too low.
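If you want to see the effect for yourself, here is a small sketch of that experiment with both flavours side by side for a single country:

SELECT P.ProductName,
       AVG(CASE C.Country WHEN 'Brazil' THEN OD.Quantity END)        AS BrazilAvg,
       AVG(CASE C.Country WHEN 'Brazil' THEN OD.Quantity ELSE 0 END) AS BrazilAvgWrong
FROM   dbo.Orders O
JOIN   dbo.[Order Details] OD ON O.OrderID = OD.OrderID
JOIN   dbo.Customers C ON C.CustomerID = O.CustomerID
JOIN   dbo.Products P ON P.ProductID = OD.ProductID
GROUP  BY P.ProductName
ORDER  BY P.ProductName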

Readers who are familiar with the PIVOT operator may wonder why I favour the above. True, it is certainly a little more verbose than a query using PIVOT. However, when we generate the query with dynamic SQL, that will remove the repetitive nature of the code, as you will see later. What is far more important is that the scheme above is very flexible. Say that we want two more columns: one for all other countries and a grand total per product. That is very simple: just remove the WHERE clause, and add one more filtered SUM and one unfiltered:

SELECT P.ProductName, 
       SUM(CASE C.Country WHEN 'Brazil'  THEN OD.Quantity END) AS Brazil,
       SUM(CASE C.Country WHEN 'France'  THEN OD.Quantity END) AS France,
       SUM(CASE C.Country WHEN 'Germany' THEN OD.Quantity END) AS Germany,
       SUM(CASE C.Country WHEN 'UK'      THEN OD.Quantity END) AS UK,
       SUM(CASE C.Country WHEN 'USA'     THEN OD.Quantity END) AS USA,
       SUM(CASE WHEN C.Country NOT IN ('Brazil', 'France', 'Germany', 'USA', 'UK')
                THEN OD.Quantity
           END) AS Others,
       SUM(OD.Quantity) AS [Grand Total]
FROM   dbo.Orders O
JOIN   dbo.[Order Details] OD ON O.OrderID = OD.OrderID
JOIN   dbo.Customers C ON C.CustomerID = O.CustomerID
JOIN   dbo.Products P ON P.ProductID = OD.ProductID
GROUP  BY GROUPING SETS (P.ProductName, ())
ORDER  BY GROUPING(P.ProductName), P.ProductName

Note: While I was at it, I also added a grand total per column with help of the GROUPING SETS keyword. However, since this has no relation to dynamic SQL, I leave it unexplained and refer you to Books Online if you want to know more. This syntax was introduced in SQL 2008 and does not run on SQL 2005.

Before we dive into generating pivot queries dynamically, we will look at one more example. A pivot query always includes aggregate functions with CASE filters, and sometimes the aggregation comes in naturally, because you are looking for a sum, an average etc. But occasionally, you have a set of data with a two-column key that you want to present in matrix form without any aggregation. As an example we can take the view ProductCountrySales in NorthDynamic:

CREATE VIEW ProductCountrySales AS
   SELECT P.ProductName, C.Country, SUM(OD.Quantity) AS TotAmount
   FROM   dbo.Orders O
   JOIN   dbo.[Order Details] OD ON O.OrderID = OD.OrderID
   JOIN   dbo.Customers C ON C.CustomerID = O.CustomerID
   JOIN   dbo.Products P ON P.ProductID = OD.ProductID
   GROUP  BY P.ProductName, C.Country

We want to use this view to return the same data as in the first example rather than using the base tables. But since the SUM is inside the view, there is no apparent need to perform any aggregation. Nevertheless, we need an aggregate function, because that is the only way to get the data for the same product onto a single line. The trick is to use MIN or MAX:

SELECT ProductName, 
       MIN(CASE Country WHEN 'Brazil'  THEN TotAmount END) AS Brazil,
       MIN(CASE Country WHEN 'France'  THEN TotAmount END) AS France,
       MIN(CASE Country WHEN 'Germany' THEN TotAmount END) AS Germany,
       MIN(CASE Country WHEN 'UK'      THEN TotAmount END) AS UK,
       MIN(CASE Country WHEN 'USA'     THEN TotAmount END) AS USA
FROM   dbo.ProductCountrySales
WHERE  Country IN ('Brazil', 'France', 'Germany', 'USA', 'UK')
GROUP  BY ProductName

The reason this works is that the CASE filters ensure that every MIN (or MAX) only sees one single non-NULL value.

Let's now move over to generating pivot queries dynamically, so that we can get any number of columns in our pivot. A stored procedure that generates and runs a dynamic pivot typically consists of six steps:

  1. Determining the values that will define the columns in the query, and saving these into a temp table.
  2. Creating the initial part of the query, that is, SELECT + initial columns.
  3. Generating all the required CASE-filtered calls to the aggregate function with help of the data in the temp table and adding these to the query string.
  4. Adding any final columns in the desired result set to the query string.
  5. Adding the rest of the query: FROM, JOIN, WHERE, GROUP BY and ORDER BY clauses.
  6. Running the generated query.

We will go through these steps with the procedure ProductCountrySales_sp. The purpose of this procedure is to return sales per product and country during a period, only including products and countries that actually had sales during this period. That is, if no customer from, say, Belgium bought anything during the period, Belgium should not be included in the result set. The country columns should be sorted first alphabetically by continent, and then by country within the continent. In this first version, we do not include any subtotals per continent, but we will look at this at the end of the section.

We will go through this procedure step by step, although not in the order above, but rather in the order you should follow when you implement a solution for a dynamic pivot. We start with an empty shell, with only step 6 included:

CREATE OR ALTER PROCEDURE ProductCountrySales_sp @fromdate date,
                                                 @todate   date,
                                                 @debug    bit = 0 AS
BEGIN TRY
   DECLARE @lineend char(3) = ',' + char(13) + char(10),
           @sql     nvarchar(MAX)
    
   -- More code will be added as we go on.

   IF @debug = 1
      PRINT @sql
   EXEC sp_executesql @sql, N'@fromdate date, @todate date', @fromdate, @todate
END TRY
BEGIN CATCH
   IF @@trancount > 0 ROLLBACK TRANSACTION
   ; THROW
END CATCH

The procedure takes three parameters. Beside @fromdate and @todate, dictated by the desired functionality, there is also a @debug parameter. As I have pointed out a few times, this parameter is important when you work with dynamic SQL, so that you can inspect the dynamic SQL if things go wrong. The last statement in the procedure runs the dynamic SQL that we will learn to generate in the text that follows. We can see that this SQL string will have @fromdate and @todate as parameters; we will of course not embed the values of these parameters into the SQL string.

The logic of the procedure is wrapped in TRY-CATCH. This has nothing to do with dynamic SQL, but I am including it for the sake of best practice. If you want to learn more about error handling and why the CATCH block looks like it does, see Part One in my series Error and Transaction Handling in SQL Server.

There are two local variables. As you might guess, @sql is where we will build our query. @lineend is akin to @nl that I introduced in the section Spacing and Formatting, but in addition to the CR-LF, there is also a comma. Since the lines we generate will end in a comma, it makes good sense to have the comma in the variable. This is also due to a restriction with the string_agg function that I will return to.

Let's now look at step 1. In this step we set up the #pivotcols table. (While you can use any name, I am using this name as a standard name in this text.) Logically, this table needs to have three columns:

  1. One that defines the column order, typically called colno. This is typically an int column.
  2. One that specifies the values for the CASE filters, typically called filterval. The data type for this column should match the source column it is taken from.
  3. One that defines the names of the columns in the result set of the generated query, typically called colname. The data type of this column is always sysname. (Because it is a metadata name.)

The values in all three columns are always unique.

Sometimes you can use the same column for more than one purpose. In this particular example, filterval and colname would have the same values, so we skip filterval and use colname in both places. Had the requirements asked for a strictly alphabetic list of the output columns, we could also have skipped colno.

This is how we define and fill #pivotcols for ProductCountrySales_sp:

    CREATE TABLE #pivotcols (colno   int      NOT NULL PRIMARY KEY,
                             colname sysname  NOT NULL UNIQUE)
    INSERT #pivotcols (colno, colname)
       SELECT row_number() OVER(ORDER BY Cou.Continent, Cou.Country), Cou.Country
       FROM   dbo.Countries Cou
       WHERE  EXISTS (SELECT *
                      FROM   dbo.Customers C
                      JOIN   dbo.Orders O ON O.CustomerID = C.CustomerID
                      WHERE  C.Country = Cou.Country
                        AND  O.OrderDate BETWEEN @fromdate AND @todate)

I suspect that many readers would have written this as a join over all three tables, but in that case you would have to throw in a DISTINCT to weed out duplicates. Using EXISTS is a cleaner way to achieve the goal. We are in fact asking for the countries that had sales in the period.

We now jump to step 5, the part where we set up the body of the query after the SELECT list. It is a good idea to write this part before you do steps 2, 3 and 4, because it is here that you determine the selection logic of the query, and it is here that you define the aliases you will use in the preceding steps. For this example, this step is not particularly challenging:

   SELECT @sql += ' 
     FROM   dbo.Orders O
     JOIN   dbo.[Order Details] OD ON O.OrderID = OD.OrderID
     JOIN   dbo.Customers C ON C.CustomerID = O.CustomerID
     JOIN   dbo.Products P ON P.ProductID = OD.ProductID
     WHERE  O.OrderDate BETWEEN @fromdate AND @todate
     GROUP  BY P.ProductName
     ORDER  BY P.ProductName'

Note that we are adding to the @sql variable with the += operator. (Recall the caveat about using this operator if the string you append consists of several long strings.)

Now that we have the aliases, we can go back to step 2, which in this example is a one-liner:

   SELECT @sql="SELECT P.ProductName " + @lineend

ProductName is the one column to come back earlier than the pivot columns.

Step 3 is the centrepiece of a dynamic pivot from the perspective of dynamic SQL. This is where we build the SUM/CASE lines from the contents of #pivotcols. Let's first look at how to build a single line, before we look at concatenating the lines together into the @sql variable, so that we are not trying to do too many things at the same time. What we are after is to build a line like this:

 SUM(CASE C.Country WHEN 'Brazil' THEN OD.Quantity END) AS Brazil,

There are two occurrences of Brazil here. One is a string literal, and the second is a column alias. We have learnt that we should always use variables where this is syntactically possible, so we may think that we should use a variable in place of the string literal. However, this fails us here, because parameters only work when we have a fixed number of them, and we do not know in advance how many we will need. Thus, we need to inline both the string literal and the column alias (which never could be a variable anyway). When we do this, we need to take the usual precautions to prevent SQL injection and to avoid syntactic accidents when the country name includes special characters. We may also want to add some indentation so that the debug output is more readable. This leads to something like this:

SELECT concat(space(7), ' SUM(CASE C.Country WHEN ', quotename(colname, ''''),
              ' THEN OD.Quantity END) AS ', quotename(colname), @lineend)
FROM   #pivotcols

You may note that I use the concat function rather than the + operator. It does not really matter here, but if you have a separate filterval column in #pivotcols, this may be an integer or a date value. In that case, concat is more practical, because all its inputs are implicitly converted to strings, so the code does not have to be littered with explicit type casts. The call space(7) adds seven spaces of indentation.

Note: The concat function was introduced in SQL 2012, so if you are still on SQL 2008 or earlier, you need to use the + operator instead.
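A quick illustration of the implicit conversion; the local variable is only there for the sake of the example:

DECLARE @filterval int = 42
-- Both expressions produce the same string, but + requires an explicit convert.
SELECT concat('WHEN ', @filterval, ' THEN') AS with_concat,
       'WHEN ' + convert(varchar(10), @filterval) + ' THEN' AS with_plus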

In both places where colname appears, it is wrapped in quotename so that we can handle country names such as Costa Rica and Côte d'Ivoire without syntax errors. Since colname is of type sysname, we do not have to worry that the input value to quotename is too long, which would result in NULL being returned. We also add @lineend to the string to get the final comma and a line-break for pretty output.

To have any use for these lines, we need to concatenate them into a single string. We have done this a few times already in this article, but nowhere is it as essential as with a dynamic pivot, so let's take a little closer look at how to do this. There are two options: the more straightforward function string_agg, which is available in SQL 2017 and later, and the less-than-intuitive FOR XML PATH, available since SQL 2005. Both of them have some peculiarities that call for some modification of the SELECT above. Here is the complete code for step 3 with string_agg:

   ; WITH sumcaselines AS (
      SELECT colno, 
             convert(nvarchar(MAX), 
                concat(space(7), ' SUM(CASE C.Country WHEN N', quotename(colname, ''''),
                       ' THEN OD.Quantity END) AS ', quotename(colname))) AS sumcase
      FROM   #pivotcols
   )   
   SELECT @sql += string_agg(sumcase, @lineend) WITHIN GROUP (ORDER BY colno)
   FROM   sumcaselines

The reason I use a CTE here is simply to have the string concatenation separated from the call to string_agg, but if you prefer to put the concatenation inside string_agg, you can do so. You may observe that I have modified the SELECT above in two regards. The first is that I have wrapped the concat operation in a convert to nvarchar(MAX). If your pivot has many columns, it may be that the total length of the concatenation exceeds 8000 bytes, which means that the result that comes out of string_agg must be nvarchar(MAX). However, the output type of string_agg is the same as the type of the input, so if the input is nvarchar(n), you cannot get more than 4000 characters back. Thus, in order to get nvarchar(MAX) back from string_agg, you need to send nvarchar(MAX) in. (Thankfully, if you forget the convert and the result is more than 8000 bytes, string_agg raises an error and does not truncate the string silently.) The other modification is that @lineend is absent from the main concatenation; instead it is passed as the second argument to string_agg, that is, the delimiter for the concatenation. (This argument must be a constant string literal or a variable. It cannot be an expression such as ',' + @nl, which is one reason why @lineend includes the comma.)

Also pay attention to the WITHIN GROUP clause, which we have not seen in our earlier encounters with string_agg. This clause permits us to specify the order within the concatenated string, and the order is already defined by the colno column in #pivotcols, so we use that column.

That is the answer with FOR XML PATH:

   SELECT @sql +=
       (SELECT concat(space(7), ' SUM(CASE C.Country WHEN ', quotename(colname, ''''), 
                      ' THEN OD.Quantity END) AS ', quotename(colname), @lineend)
        FROM   #pivotcols
        ORDER  BY colno
        FOR XML PATH(''), TYPE).value('.', 'nvarchar(MAX)')

   SELECT @sql = substring(@sql, 1, len(@sql) - len(@lineend))

string_agg has a special place for the delimiter between the strings and understands that it should not be added after the last string. FOR XML PATH is primarily designed for a different purpose, so it does not have this capability. For this reason, we have to add a second statement to remove the final @lineend, since otherwise that comma would give us syntax errors. For the ordering on country name, we use a regular ORDER BY clause. There is no need to cast to nvarchar(MAX) here, since that happens implicitly in this case.

You may wonder what all those funny things on the last line really mean and why you need them. But it is too convoluted to explain in this context, so I will leave you in the dark. You will have to trust me that it works. And, yes, you need all that gobbledygook. But it always looks the same, so you can just copy and paste it when you need it.

Note: Many people who post examples with FOR XML PATH wrap the FOR XML query in the stuff function to remove the trailing delimiter. However, I think this makes the code difficult to read, which is why I prefer to remove @lineend in a separate statement.
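For comparison, here is a sketch of that idiom: the delimiter is placed first in each row, and the leading occurrence is then stripped with stuff. It produces the same string as the two statements above.

   SELECT @sql += stuff(
          (SELECT @lineend + concat(space(7), ' SUM(CASE C.Country WHEN ',
                                    quotename(colname, ''''),
                                    ' THEN OD.Quantity END) AS ', quotename(colname))
           FROM   #pivotcols
           ORDER  BY colno
           FOR XML PATH(''), TYPE).value('.', 'nvarchar(MAX)'),
          1, len(@lineend), '')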

The last step to look at is step 4, which is very simple in this first example: since we have no extra columns, it is empty. We will return to this step in the next example.

Here is the complete code for ProductCountrySales_sp, with the two alternatives for step 3 inside frames. That is, you would only include one of these frames when you create the procedure.

CREATE OR ALTER PROCEDURE ProductCountrySales_sp @fromdate date,
                                                 @todate   date,
                                                 @debug    bit = 0 AS
BEGIN TRY
   DECLARE @lineend char(3) = ',' + char(13) + char(10),
           @sql     nvarchar(MAX)

   CREATE TABLE #pivotcols (colno   int      NOT NULL PRIMARY KEY,
                            colname sysname  NOT NULL UNIQUE)
   INSERT #pivotcols (colno, colname)
      SELECT row_number() OVER(ORDER BY Cou.Continent, Cou.Country), Cou.Country
      FROM   dbo.Countries Cou
      WHERE  EXISTS (SELECT *
                     FROM   Customers C
                     JOIN   Orders O ON O.CustomerID = C.CustomerID
                     WHERE  C.Country = Cou.Country
                       AND  O.OrderDate BETWEEN @fromdate AND @todate)

   SELECT @sql="SELECT P.ProductName " + @lineend


   ; WITH sumcaselines AS (
      SELECT colno, 
             convert(nvarchar(MAX), 
                concat(space(7), ' SUM(CASE C.Country WHEN N', quotename(colname, ''''),
                       ' THEN OD.Quantity END) AS ', quotename(colname))) AS sumcase
      FROM   #pivotcols
   )   
   SELECT @sql += string_agg(sumcase, @lineend) WITHIN GROUP (ORDER BY colno)
   FROM   sumcaselines

   SELECT @sql +=
       (SELECT concat(space(7), ' SUM(CASE C.Country WHEN ', quotename(colname, ''''), 
                      ' THEN OD.Quantity END) AS ', quotename(colname), @lineend)
        FROM   #pivotcols
        ORDER  BY colno
        FOR XML PATH(''), TYPE).value('.', 'nvarchar(MAX)')

   SELECT @sql = substring(@sql, 1, len(@sql) - len(@lineend))

   SELECT @sql += ' 
     FROM   dbo.Orders O
     JOIN   dbo.[Order Details] OD ON O.OrderID = OD.OrderID
     JOIN   dbo.Customers C ON C.CustomerID = O.CustomerID
     JOIN   dbo.Products P ON P.ProductID = OD.ProductID
     WHERE  O.OrderDate BETWEEN @fromdate AND @todate
     GROUP  BY P.ProductName
     ORDER  BY P.ProductName'

   IF @debug = 1
      PRINT @sql
   EXEC sp_executesql @sql, N'@fromdate date, @todate date', @fromdate, @todate
END TRY
BEGIN CATCH
   IF @@trancount > 0 ROLLBACK TRANSACTION
   ; THROW
END CATCH

Let's now look at some sample executions. This call covers the entire period that NorthDynamic Traders was active, so this is all orders in the database:

EXEC ProductCountrySales_sp '19960101', '19981231', 1

Here is the beginning of the result set, split up over three pictures to fit the page width:

You can also look in the Messages tab to see the generated query. Here is an abbreviated version:

SELECT P.ProductName ,
        SUM(CASE C.Country WHEN 'Côte d''Ivoire' THEN OD.Quantity END) AS [Côte d'Ivoire],
        SUM(CASE C.Country WHEN 'Austria' THEN OD.Quantity END) AS [Austria],
        SUM(CASE C.Country WHEN 'Belgium' THEN OD.Quantity END) AS [Belgium],
        ...
        SUM(CASE C.Country WHEN 'Brazil' THEN OD.Quantity END) AS [Brazil],
        SUM(CASE C.Country WHEN 'Venezuela' THEN OD.Quantity END) AS [Venezuela] 
     FROM   dbo.Orders O
     JOIN   dbo.[Order Details] OD ON O.OrderID = OD.OrderID
     JOIN   dbo.Customers C ON C.CustomerID = O.CustomerID
     JOIN   dbo.Products P ON P.ProductID = OD.ProductID
     WHERE  O.OrderDate BETWEEN @fromdate AND @todate
     GROUP  BY P.ProductName
     ORDER  BY P.ProductName

Pay attention to the first country, Côte d'Ivoire (recall that the countries are primarily sorted by continent, so Africa comes first). Without quotename, that country name would have produced syntax errors, due to the single quote and the space in the name.

If you run it for only two weeks in July 1997, you get far fewer columns:

EXEC ProductCountrySales_sp '19970701', '19970714', 1

These are the first four rows in the result set, which only has six columns:

And here is the SELECT list:

SELECT P.ProductName ,
        SUM(CASE C.Country WHEN 'Austria' THEN OD.Quantity END) AS [Austria],
        SUM(CASE C.Country WHEN 'Denmark' THEN OD.Quantity END) AS [Denmark],
        SUM(CASE C.Country WHEN 'Germany' THEN OD.Quantity END) AS [Germany],
        SUM(CASE C.Country WHEN 'Italy' THEN OD.Quantity END) AS [Italy],
        SUM(CASE C.Country WHEN 'Canada' THEN OD.Quantity END) AS [Canada],
        SUM(CASE C.Country WHEN 'USA' THEN OD.Quantity END) AS [USA],
        SUM(CASE C.Country WHEN 'Brazil' THEN OD.Quantity END) AS [Brazil] 
     FROM   Orders O
     ...

We will now move on to the next example, which is ProductEmployeeSales_sp. In this procedure we want to see sales per product and employee during a period, with the employees as columns, sorted by their total sales. We should only have columns for employees who actually had sales in the period. In this example, we also want grand totals, both per product and per employee.

This leads to a few changes from the previous example. To start with, we need a separate filterval column in #pivotcols, since we want to filter by EmployeeID, whereas for the column names we want the names of the employees. Next, because of the Grand Total column, step 4 is non-empty. As it turns out, this step does not show up as a separate line of code, but rather as modifications to the code for steps 3 and 5. The Grand Total column itself is simply added first to the segment with the FROM-JOIN clauses. And because of this column, the last dynamic column should now be followed by a comma. In the solution with string_agg, this means that we need to add an extra @lineend. On the other hand, in the variant with FOR XML PATH, it means that we should omit the substring operation that removes the final @lineend, so this part is actually shorter when we have extra columns.

Since we also want grand totals per employee, we need to use the GROUPING SETS clause in the GROUP BY clause, in the same manner as in the earlier example with a static query. (Which, as noted, has little to do with dynamic pivot as such, but since it is not uncommon to ask for it, I have added it to the example.)

Here is the full code, with the two versions for step 3 in frames. I have also highlighted the differences with regards to the first example.

CREATE OR ALTER PROCEDURE ProductEmployeeSales_sp @fromdate date, 
                                                  @todate   date, 
                                                  @debug    bit = 0 AS
BEGIN TRY
   DECLARE @lineend char(3) = ',' + char(13) + char(10),
           @sql     nvarchar(MAX)
   
   CREATE TABLE #pivotcols (colno     int      NOT NULL PRIMARY KEY,
                            filterval int      NOT NULL UNIQUE,
                            colname   sysname  NOT NULL UNIQUE)
                            
   ; WITH OrderAgg AS (
       SELECT O.EmployeeID, SUM(OD.Quantity) AS Quantity
       FROM   dbo.Orders O
       JOIN   dbo.[Order Details] OD ON O.OrderID = OD.OrderID
       WHERE  O.OrderDate BETWEEN @fromdate AND @todate
       GROUP  BY O.EmployeeID
   )
   INSERT #pivotcols(colno, filterval, colname)
      SELECT row_number() OVER(ORDER BY OA.Quantity DESC, E.EmployeeID), E.EmployeeID, 
             E.FirstName + ' ' + E.LastName
      FROM   dbo.Employees E 
      JOIN   OrderAgg OA ON E.EmployeeID = OA.EmployeeID

   SELECT @sql = 'SELECT P.ProductName ' + @lineend


   ; WITH sumcaselines AS (
      SELECT colno, 
             convert(nvarchar(MAX), 
                concat(space(7), ' SUM(CASE O.EmployeeID WHEN ', filterval, 
                       ' THEN OD.Quantity END) AS ', quotename(colname))) AS sumcase
      FROM   #pivotcols
   )
   SELECT @sql += string_agg(sumcase, @lineend) WITHIN GROUP (ORDER BY colno) + @lineend
   FROM   sumcaselines

   SELECT @sql += 
       (SELECT concat(space(7), ' SUM(CASE O.EmployeeID WHEN ', filterval, 
                      ' THEN OD.Quantity END) AS ', quotename(colname), @lineend)
        FROM   #pivotcols
        ORDER  BY colno
        FOR XML PATH(''), TYPE).value('.', 'nvarchar(MAX)')    -- Nothing here.

   SELECT @sql += '
              SUM(OD.Quantity) AS [Grand Total]
       FROM   dbo.Orders O
       JOIN   dbo.[Order Details] OD ON O.OrderID = OD.OrderID
       JOIN   dbo.Products P ON P.ProductID = OD.ProductID
       WHERE  O.OrderDate BETWEEN @fromdate AND @todate
       GROUP  BY GROUPING SETS (P.ProductName, ())
       ORDER  BY GROUPING(P.ProductName), P.ProductName'

   IF @debug = 1
      PRINT @sql
   EXEC sp_executesql @sql, N'@fromdate date, @todate date', @fromdate, @todate
END TRY
BEGIN CATCH
   IF @@trancount > 0 ROLLBACK TRANSACTION
   ; THROW
END CATCH

You may note that there is a CTE when building #pivotcols. This CTE computes the sales amount for the employees during the period in a relational way, so we can use this amount to order the columns.

Here is an example call:

EXEC ProductEmployeeSales_sp '19970201', '19970214', 1

It lists four (out of nine) employees. Here are the first few rows in the result set:

The last example is ProductContinentSales_sp. It is similar to ProductCountrySales_sp, but here we want sub-totals per continent as well as a grand total. To simplify things, we ignore whether a country has sales in the period, but we include all countries from which there is at least one customer. I mostly leave the example uncommented, but I will point out that filterval is again absent from #pivotcols, but on the other hand there is a helper column which is used in step 3, where we want to filter on the country name for the regular columns, whereas for the sub-total columns we want to filter on the continent name. I show this step with FOR XML only.

CREATE OR ALTER PROCEDURE ProductContinentSales_sp @fromdate date,
                                                   @todate   date,
                                                   @debug    bit = 1 AS
BEGIN TRY
   DECLARE @lineend char(3) = ',' + char(13) + char(10),
           @sql nvarchar(MAX)

   CREATE TABLE #pivotcols (colno       int       NOT NULL PRIMARY KEY,
                            colname     sysname   NOT NULL UNIQUE,
                            IsContinent bit       NOT NULL)

   INSERT #pivotcols (colno, colname, IsContinent)
      SELECT row_number() OVER(ORDER BY Continent, IsContinent, Country), 
             IIF(IsContinent = 1, Continent, Country), IsContinent
      FROM   (SELECT DISTINCT C.Country, cou.Continent, 0 AS IsContinent
              FROM   dbo.Customers C
              JOIN   dbo.Countries cou ON C.Country = cou.Country
              UNION ALL
              SELECT DISTINCT NULL, cou.Continent, 1 AS IsContinent
              FROM   dbo.Customers C
              JOIN   dbo.Countries cou ON C.Country = cou.Country) AS u

   SELECT @sql = 'SELECT P.ProductName ' + @lineend

   SELECT @sql +=
       (SELECT concat(space(7), ' SUM(CASE ',
                       IIF(IsContinent = 1, 'cou.Continent', 'C.Country'),
                       ' WHEN ', quotename(colname, ''''), 
                       ' THEN OD.Quantity END) AS ', quotename(colname), @lineend)
        FROM   #pivotcols
        ORDER  BY colno
        FOR XML PATH(''), TYPE).value('.', 'nvarchar(MAX)')

   SELECT @sql += '
              SUM(OD.Quantity) AS [Grand Total]
       FROM   dbo.Orders O
       JOIN   dbo.[Order Details] OD ON O.OrderID = OD.OrderID
       JOIN   dbo.Customers C  ON C.CustomerID = O.CustomerID
       JOIN   dbo.Countries cou ON cou.Country = C.Country
       JOIN   dbo.Products P ON P.ProductID = OD.ProductID
       WHERE  O.OrderDate BETWEEN @fromdate AND @todate
       GROUP  BY GROUPING SETS (P.ProductName, ())
       ORDER  BY GROUPING(P.ProductName), P.ProductName'

   IF @debug = 1
      PRINT @sql
   EXEC sp_executesql @sql, N'@fromdate date, @todate date', @fromdate, @todate
END TRY
BEGIN CATCH
   IF @@trancount > 0 ROLLBACK TRANSACTION
   ; THROW
END CATCH
go
EXEC ProductContinentSales_sp '19960101', '19981231', 1

Since the result set is fairly wide, I do not include it in full here, but I only show the last couple of columns, which include two continent totals as well as the grand total.

Situations Where You (Probably) Should Not Use Dynamic SQL

In this chapter I will discuss situations where you may be inclined to use dynamic SQL, but where it may not be the best of choices. It may be that you have made a mistake earlier in your design, or it could also be that there is an alternative which does not require dynamic SQL.

Dynamic Table Names in Application Code

It is not uncommon to see questions in forums where the poster says that the table name for a query in a stored procedure needs to be dynamic. If you feel the urge to do this for something that will run as a regular part of an application, it is very likely that you are barking up the wrong tree. To clarify, I am talking about queries – SELECT, INSERT, UPDATE, DELETE, MERGE. For something that involves DDL it may be a different matter, but you rarely have DDL running as part of the daily operations of an application.

Most likely, this is the result of some mistake earlier in the design. In a relational database, a table is supposed to model a unique entity with a distinct set of attributes (that is, the columns). Typically, different entities require different operations. So if you think that you want to run the same query on several tables, something is not the way it should be.

Over the years, I have observed more than one reason why people want to do this, and in this section I will cover a few misconceptions. If you have encountered a situation where you wanted to use dynamic table names in application code that I do not cover here, please drop me a line on esquel@sommarskog.se.

Process-Unique Temp Tables

This is a confusion of very inexperienced SQL users, who learnt dynamic SQL far too early. They want a work table that is unique to the current user, so they go on and create a table like WorkTable_98, where 98 is the spid of the current process, and all access to that table has to be through dynamic SQL. Most readers of this text know that the real solution is to use a local temp table, one with a name starting with a single number sign (#). Each such table is only visible to the current process, and it is automatically dropped when the scope where it was created exits. Thus, there is no need to create dynamically named tables in this case.
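
Just to illustrate, here is a minimal sketch of the temp-table approach. The work table itself is made up for the example; the query uses the Order Details table from the earlier examples:

CREATE TABLE #WorkTable (OrderID int            NOT NULL PRIMARY KEY,
                         Amount  decimal(12, 2) NOT NULL)

INSERT #WorkTable (OrderID, Amount)
   SELECT OrderID, SUM(Quantity * UnitPrice)
   FROM   dbo.[Order Details]
   GROUP  BY OrderID

-- #WorkTable is visible only to the current process and is dropped automatically
-- when the scope exits, so there is no need for a per-spid name or dynamic SQL.
SELECT COUNT(*) AS cnt FROM #WorkTable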

The Over-Zealous Will to Generalise

It is not uncommon in a database that there are many lookup tables that all look the same: there is an id, a name and maybe some generic auditing columns. This leads some people to think that they should generalise this, so instead of having one set of CRUD (Create, Read, Update, Delete) procedures for each table, they want a single set where the table name is a parameter.

It follows from what I said above that this is wrong. The tables model different entities, and there should be one set of procedures for each table for exactly that reason. If you feel that this means a lot of boring coding, you can write a program that generates these procedures, so that adding a new lookup table is a breeze. My experience is that such tables will not always remain carbon copies of each other. That is, during the lifetime of the system additional attributes will be identified for some of these lookup tables, so over time these procedures will deviate from each other simply because they should be different.

A side note here: if you do not use stored procedures at all but create your SQL statements in client code, it is a whole different thing. There you can have a generic class for all these lookup tables that holds the common generic code, and you can instantiate subclasses for each lookup table, adding extra properties for extra columns, overriding methods etc. But T‑SQL does not lend itself to these kinds of things.

I do not know, but maybe some of the people who want to do this think they are saving resources by reducing the number of stored procedures. But keep in mind that each batch of dynamic SQL is a nameless stored procedure, so it all comes out the same. Whether you have 100 procedures for 100 tables, or you have a single procedure for your hundred tables, that single procedure produces 100 nameless stored procedures generating 100 query plans, just like 100 static procedures do.

Multi-Tenant Applications

When you design a multi-tenant application there are a couple of possibilities. One is to have a single database, and have a TenantID in every table. Then you carry the tenant ID around in all your stored procedures. This is by no means an uncommon design. It does come with a risk, though: if a careless developer messes up, the data for one tenant can be exposed to another tenant.

For this reason, some people prefer to have one database (or one schema) per tenant. But then they think that they only want one set of stored procedures which takes the name of a tenant database as a parameter, "because everything else would be a maintenance nightmare". I submit that this is completely wrong. Having this single set of stored procedures is a total maintenance nightmare, since just about every line of code has to be littered with dynamic SQL. No, you should have a set of stored procedures in every database (or every schema), and then you need to develop methods to distribute and roll out code and table changes to all databases or schemas. In the days of DevOps this should not be overly challenging.

Note: Obviously, if you do not use stored procedures at all, but submit all queries from client code, this issue is quite simple. For a multi-database application, the database is determined by the connection string. For a multi-schema application, you would probably have personal logins where each user has the appropriate default schema.

Attribute as Part of the Table Name

In this pattern it is very clear that it is a case of multiple tables modelling the same entity. Instead of a single Orders table, there is Orders_ALFKI, Orders_VINET etc. with one table per customer. Or Orders_1996, Orders_1997 etc. with one table per year. That is, what really is an attribute of the entity and should be a column in a single table has instead been made part of the table name. This is rarely, if ever, a correct design.

The exact train of thought that leads to this design is not entirely clear to me. But maybe the people who use this pattern have not fully absorbed the relational way of thinking. This forum thread can serve as an illustration. In this particular case, it appears that it never occurred to the poster that his CodNeg should be a column in his temp table.

Home-Brewed Partitioning

It appears that sometimes when people put a key value in the table name, they have a concern that one single table would be too big for SQL Server. That anxiety is not very well-founded when it comes to regular day-to-day operations. With suitable indexes you can quickly find the data you are looking for, and it does not matter if the total table size is 1 TB. And if you need to scan all that data, you need to scan it, no matter whether it is one table or a hundred tables. But it is easier to write the query if it is a single table or a view.

That said, there are situations where you may want to partition your data. Here are some possible reasons:

  • You want to distribute the data over multiple filegroups to spread the load, possibly putting older data on slower drives.
  • You want to be able to age out old data quickly. For instance, say that you want to retain all orders for 12 months. So when June 2020 starts, you want to get rid of all orders from June 2019, and you do not want to run a DELETE statement, which could take a long time to complete.
  • A fairly advanced scenario: different customers have very different patterns, so you want different plans for the same query depending on the customer, and to that end you want different histograms in the statistics for different customers.

But in none of these cases does it make sense to create Orders_201906, Orders_201907 etc., or Orders_ALFKI, Orders_VINET etc. and then use dynamic SQL to access the table you need for the moment. To wit, SQL Server provides not one, but two different partitioning features: partitioned tables and partitioned views. I will give a very quick overview of them here and then provide links for further reading.

Partitioned tables is the more widespread feature. They were introduced in SQL 2005, and for a long time they were only available in Enterprise Edition, but this changed with the release of SQL 2016 SP1, and they are now available in all editions. To create a partitioned table, you first create a partition function to define the ranges for the partitions. Then you define a partition scheme that maps the partition ranges to different filegroups, and finally you create the table on that partition scheme, using it as if it were a filegroup. The feature supports up to 15 000 partitions for a single table.
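
As an illustration, here is a minimal sketch of the three steps. The object names are made up, and for simplicity all partitions are mapped to the PRIMARY filegroup; in real life you would map the ranges to different filegroups:

CREATE PARTITION FUNCTION OrderMonthPF (date)
   AS RANGE RIGHT FOR VALUES ('20190601', '20190701', '20190801')

CREATE PARTITION SCHEME OrderMonthPS
   AS PARTITION OrderMonthPF ALL TO ([PRIMARY])

-- The partition scheme takes the place of a filegroup when you create the table.
CREATE TABLE dbo.OrdersPartitioned
       (OrderID    int      NOT NULL,
        OrderDate  date     NOT NULL,
        CustomerID nchar(5) NOT NULL,
        CONSTRAINT pk_OrdersPartitioned PRIMARY KEY (OrderID, OrderDate)
       ) ON OrderMonthPS (OrderDate)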

Partitioned views have been around even longer; they were introduced already in SQL 2000, and they have always been available in all editions of SQL Server. A partitioned view consists of a number of individual tables that all have a common structure and all have the same primary-key columns. For the partitioning column, which should be part of the PK, each table has a CHECK constraint defined so that a certain value fits into at most one table. The view itself consists of a number of SELECT queries against these tables combined with UNION ALL. With partitioned views, you would indeed have tables like Orders_201906 or Orders_ALFKI, but all access would be through the view Orders. When you query the view, the plan includes all tables in the view, but the plan is such that only the tables qualified by the partitioning key are actually accessed at run-time. I have not seen any upper limit for the number of tables you can have in a partitioned view, but I would say that 100 would be very many, and in practice you should stay below 20.
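
Here is a minimal sketch of a partitioned view with two made-up month tables, just to show the principle:

CREATE TABLE dbo.Orders_201906 
       (OrderID   int  NOT NULL,
        OrderDate date NOT NULL
           CONSTRAINT ck_Orders_201906 CHECK (OrderDate >= '20190601' AND OrderDate < '20190701'),
        CONSTRAINT pk_Orders_201906 PRIMARY KEY (OrderID, OrderDate))

CREATE TABLE dbo.Orders_201907 
       (OrderID   int  NOT NULL,
        OrderDate date NOT NULL
           CONSTRAINT ck_Orders_201907 CHECK (OrderDate >= '20190701' AND OrderDate < '20190801'),
        CONSTRAINT pk_Orders_201907 PRIMARY KEY (OrderID, OrderDate))
go
CREATE VIEW dbo.OrdersAll AS
   SELECT OrderID, OrderDate FROM dbo.Orders_201906
   UNION ALL
   SELECT OrderID, OrderDate FROM dbo.Orders_201907

A query against dbo.OrdersAll with a condition like WHERE OrderDate = '20190704' only touches Orders_201907 at run-time, thanks to the CHECK constraints.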

A very specialised form of partitioned views are distributed partitioned views, where the tables are spread over multiple SQL Server instances. This is quite an advanced feature, and it is only supported in Enterprise Edition.

Normally, all tables in a partitioned view have the same set of columns, but there is room for some flexibility. So if you find that you need new columns in your Orders view starting from January 2021, you can leave the tables Orders_202012 and earlier unchanged and just put NULL in the SELECT from these tables in the view definition. Or if you need extra columns for the customer ALFKI, and only that customer, you can have these columns in Orders_ALFKI and not include them in the view. Queries that access these columns would access Orders_ALFKI directly and not the view Orders.

Of the three bullet points I listed above, partitioned tables and partitioned views both support the first two, but you can only achieve the last bullet point with partitioned views, where each table has its own histogram. (Although it may be possible to get good-enough results with filtered statistics when the data is in a single table.)

When it comes to the second point, partition switching, this is an operation that requires several steps, not least if more than one (logical) table is involved, for instance both an Orders and an OrderDetails table. If this is going to happen on the 1st of every month, you really want to automate the operation in a stored procedure that you schedule. No matter whether you use partitioned tables or partitioned views, this procedure will need to use a whole lot of dynamic SQL to create a new partition or a new table and to get rid of the old one. I am not including an example of how to do this here, but I want to point out that this is an example of good use of dynamic SQL. You have dynamic SQL to perform a specific task inside a single stored procedure. Compare that with dynamic table names where you litter the code everywhere.

This was a very brief introduction to partitioning in SQL Server. For a somewhat longer introduction with code samples etc., Data Platform MVP Cathrine Wilhelmsen has a two-part blog post Table Partitioning in SQL Server which covers partitioned tables. Stefan Delmarco has an excellent article on partitioned views, SQL Server 2000 Partitioned Views. (This article is a little old, but the feature has not changed much since.)

Dynamic Column Names

And then there are users who mysteriously think that SQL Server can read their minds, so that if they do

SELECT @colname FROM tbl

SQL Server will actually return the column whose name is in @colname. Also common is that they want to do:

UPDATE tbl
SET    @colname = @value
WHERE  keycol = @id

But this will only set the variable @colname to the value of @value – if there is a row in the table where keycol is equal to @id.

If you recall what I said in the previous section, you know why this is wrong: each column is supposed to model a unique attribute of the entity that the table models. For this reason, different attributes (i.e., columns) tend to have different domains for the values they can take, which makes the above rather pointless.

Thus, I would submit that if you feel that you need to do something like the above, there is an error somewhere in the design. One situation I can think of is that you have a table with columns like SalesAmt_Jan, SalesAmt_Feb etc. The proper design would of course be to have a sub-table SalesAmts with one row per month. However, you may be in the unfortunate situation that you are stuck with a design like this. In that case, my preference is to use an IIF expression, however boring it may be:

UPDATE tbl
SET    SalesAmt_Jan = IIF(@colname = 'SalesAmt_Jan', @salesamt, SalesAmt_Jan),
       SalesAmt_Feb = IIF(@colname = 'SalesAmt_Feb', @salesamt, SalesAmt_Feb),
       ...
FROM   tbl
WHERE  keycol = @id

On SQL 2008 and earlier, you would have to use CASE instead of IIF.

Another misconception I have occasionally sensed is that some developers are under the impression that updating twenty columns will use more resources than updating a single one. A typical UPDATE statement in a stored procedure that supports a UI may look something like this:

UPDATE tbl
SET    col1 = @val1,
       col2 = @val2, 
       ...
       col20 = @val20
WHERE  keycol = @id

Now, say that the user actually only changed the value for col1, and the variables from @val2 to @val20 hold the current values for col2 to col20. In this case they think that this is leaner on the server:

UPDATE tbl   
SET    col1 = @val1 
WHERE  keycol = @id

I actually ran some tests on this, and my conclusion is that if there are no indexes on the columns col2 to col20, there is no difference at all. I found that the amount of transaction log produced was exactly the same for the two statements. When any of the extra columns were present in an index, I did see some increase in the amount of transaction log produced. However, since this typically is a matter of single-row updates, the actual impact is likely to be entirely negligible. On the other hand, if you are generating UPDATE statements that only include the columns for which there are changes, you will generate one UPDATE statement for each column combination, and each of these will result in a cache entry, so you may be taking up more resources that way.

Set Column Aliases Dynamically

The desire here is to do something like this:

SELECT col AS @myname FROM tbl

I would guess that the most common scenario is a semi-dynamic pivot. That is, the number of columns is fixed, but you want the header to reflect the contents of the column. As one example, say that you want to return sales per employee for the last three months, with the employees as rows and the months as columns, and you want year-month in the column header. In many cases this is something you can handle in the presentation layer. If you still were to do it in SQL, you can implement it as a dynamic pivot according to the pattern I introduced earlier. But it is in fact possible to achieve this particular goal without dynamic SQL. Insert the result into a temp table with fixed column names, and then rename the columns as desired. Here is a procedure to do this for the given example. While the conditions are a little more complex, you can recognise the pattern of SUM with CASE filters from the section on dynamic pivot.

CREATE OR ALTER PROCEDURE AmountPerEmployee @yearmonth char(6) AS
BEGIN TRY
   CREATE TABLE #temp (EmployeeID int            NOT NULL PRIMARY KEY,
                       month1     decimal(10, 2) NOT NULL,
                       month2     decimal(10, 2) NOT NULL,
                       month3     decimal(10, 2) NOT NULL)

   DECLARE @month1 char(6) = 
              convert(char(6), dateadd(MONTH, -2, @yearmonth + '01'), 112),
           @month2 char(6) =
              convert(char(6), dateadd(MONTH, -1, @yearmonth + '01'), 112),
           @nextmon date = dateadd(MONTH, 1, @yearmonth + '01') 

   INSERT #temp (EmployeeID, month1, month2, month3)
      SELECT EmployeeID, 
             SUM(CASE WHEN O.OrderDate >= @month1 + '01' AND
                           O.OrderDate < @month2 + '01' THEN OD.Quantity END), 
             SUM(CASE WHEN O.OrderDate >= @month2 + '01' AND
                           O.OrderDate < @yearmonth + '01' THEN OD.Quantity END),
             SUM(CASE WHEN O.OrderDate >= @yearmonth + '01' AND
                           O.OrderDate < @nextmon THEN OD.Quantity END)
      FROM   Orders O
      JOIN   (SELECT OrderID, SUM(Quantity) AS Quantity
              FROM   [Order Details]
              GROUP  BY OrderID) AS OD ON O.OrderID = OD.OrderID
      WHERE  O.OrderDate >= @month1 + '01'
        AND  O.OrderDate < @nextmon
      GROUP  BY EmployeeID

   EXEC tempdb..sp_rename '#temp.month1', @month1, 'COLUMN'
   EXEC tempdb..sp_rename '#temp.month2', @month2, 'COLUMN'
   EXEC tempdb..sp_rename '#temp.month3', @yearmonth, 'COLUMN'

   SELECT E.FirstName, E.LastName, t.*
   FROM   Employees E
   JOIN   #temp t ON E.EmployeeID = t.EmployeeID
END TRY
BEGIN CATCH
   IF @@trancount > 0 ROLLBACK TRANSACTION
   ; THROW
END CATCH
go
EXEC AmountPerEmployee '199802'

Note that this approach requires that you return the columns from the temp table with SELECT *; since you do not know the names, you cannot list them.

For only three months, a full dynamic pivot would be a bit of an overkill. On the other hand, if the requirement were for 24 months, the solution above is not realistic; the thought of 24 month variables is just repelling. You could still stick with the main idea, have the months in a temp table, and then loop over that temp table for the renaming part, as in the sketch below. However, you would still need 24 SUM/CASE rows in the SELECT list, and it is not unlikely that you would want a fully dynamic pivot in this case.
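
If you were to go down that road, the renaming loop could look something like this sketch. It assumes that a temp table #months (a made-up name) holds the columns colno, colname and yearmonth, that is, the fixed column names together with the year-month each column represents:

DECLARE @colname   sysname,
        @yearmonth char(6),
        @oldname   nvarchar(200)

DECLARE month_cur CURSOR STATIC LOCAL FOR
   SELECT colname, yearmonth FROM #months ORDER BY colno

OPEN month_cur

WHILE 1 = 1
BEGIN
   FETCH month_cur INTO @colname, @yearmonth
   IF @@fetch_status <> 0 BREAK

   -- sp_rename does not accept expressions as arguments, so build the
   -- old column name in a variable first.
   SELECT @oldname = '#temp.' + @colname
   EXEC tempdb..sp_rename @oldname, @yearmonth, 'COLUMN'
END

CLOSE month_cur
DEALLOCATE month_cur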

Dynamic Procedure Names

In this case, the programmer is looking into calling a stored procedure whose name is determined at run-time. Maybe because the database name is dynamic, maybe for some other reason. There is no need for dynamic SQL here, and you have seen the solution several times in this document: EXEC accepts a variable for the procedure name:

EXEC @spname

Still... I cannot escape noting that with some frequency I see even experienced programmers building a dynamic SQL statement to handle this situation.
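
To spell it out, here is a minimal sketch for the case where the database name is dynamic. The database and procedure names are made up for the example:

DECLARE @db     sysname = 'Tenant42',
        @spname nvarchar(500)

SELECT @spname = quotename(@db) + '.dbo.GetOrders'

-- EXEC happily accepts the procedure name in a variable, parameters and all.
EXEC @spname @CustomerID = N'ALFKI'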

Dynamic ORDER BY Condition

If your stored procedure returns a static query, but you want to permit the user to select the order of the rows, the best option is to sort the data client-side. Not least because then you can let the user re-sort the data without having to re-run the query. There are several grid controls on the market that can help you with this.

But if you want to do it in SQL, you can use CASE:

ORDER BY CASE WHEN @ordercol = 'thatcol' THEN thatcol END, 
         CASE WHEN @ordercol = 'thiscol' THEN thiscol END, 
         ...
OPTION (RECOMPILE)

The addition of OPTION (RECOMPILE) helps you to get a query plan which fits the chosen sort condition best.

Admittedly, if you have several sort columns and you also want to permit the user to choose between ASC and DESC, this can get out of hand, and a solution with dynamic SQL may be less complex to implement. But in that case you must be careful and not just do something like

@sql += ' ORDER BY ' + @sortcols

since that would leave you wide open to SQL injection. I briefly discussed possible approaches earlier in the section Other Injection Holes. In my article Dynamic Search Conditions, I have some more detailed examples.

SELECT * FROM tbl WHERE col IN (@list)

This is one of the most common misconceptions among inexperienced SQL users. They do something like:

SELECT @list = '1,2,3'
SELECT * FROM tbl WHERE col IN (@list)

And then they wonder why it does not work, when they get no rows back. The answer is that it does work. If there is a row where the value of col is 1,2,3, they will get that row back. But rows where col = 1, no. To wit, col IN (1, 2, 3) is just a syntactical shortcut for

col = 1 OR col = 2 OR col = 3

and col IN (@list) is the same as col = @list.

Thankfully it is not as common as it used to be, but you can still see people who answer such questions by suggesting dynamic SQL. Which is a tremendously poor solution to this problem. It opens up for SQL injection, it makes the code more complex, and neither is it good for query performance.

The correct solution is to use a list-to-table function, and there are plenty of them out there. Starting with SQL 2016, there is even a built-in function, string_split, for the task, although it is not always the best choice. See further my article Arrays and Lists in SQL Server.
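
For instance, with string_split the example above can be handled as a parameterised query with no dynamic SQL at all (keeping the placeholder names tbl and col, and assuming col is an integer column):

DECLARE @list nvarchar(MAX) = '1,2,3'

SELECT t.*
FROM   tbl t
WHERE  t.col IN (SELECT try_convert(int, s.value)
                 FROM   string_split(@list, ',') AS s)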

Sending in a WHERE Clause as a Parameter

This is something I largely covered already in the section Other Injection Holes in the chapter on SQL injection, but I would like to make a short recap as an example of dynamic SQL abuse.

I have occasionally seen users who want to send in a piece of SQL syntax to a stored procedure, and I do not mean a small piece like a column list for an ORDER BY clause, but a more substantial part like a full WHERE clause. Let me make this perfectly clear: as long as we are talking about plain application code, this is an absolute no-no.

There may be situations where the requirements are more dynamic than you can fit into the fairly rigid T‑SQL syntax, and you rather do the work client-side. And that is alright. If you want to permit users to select filters, what columns to return, what to aggregate on etc., I think this is best solved client-side. But then you should do all the work in the client for this particular function, and not pass SQL syntax to a stored procedure. Not only would that stored procedure be open to SQL injection, but you would also create a very tight coupling between client and database which will cause a maintenance nightmare.

Working with a Desk with Unknown Columns

Every now and then, I see posters on the forums who are unhappy because the temp table they created inside their dynamic SQL is not there when the dynamic SQL exits. You tell them that they need to create the temp table before running the dynamic SQL batch, thinking that they use SELECT INTO only because they are lazy bums. But, no, they reply that they cannot do that, because their temp table is dynamic.

I have quite a problem with addressing their underlying fallacy, because despite having seen this pattern many times over the years, I have never been able to fully understand how they ended up in this miserable situation. When I have asked these people to describe their problem from start to finish, I have at best gotten a fragmentary answer, and the thread has died unresolved.

I have a theory, though, and that is that their dynamic temp table is the result of a dynamic pivot. As I said in the section on dynamic pivot, this is a non-relational operation, and once you have performed your dynamic pivot, there is only one thing you can do with the result: return the data to the client. You simply aren't in Kansas anymore, Toto. If you want to perform some filtering, computations or whatever on your data after the dynamic pivot, you need to go back to the drawing board, rearrange your computation order and perform these actions before you do the dynamic pivot. The dynamic pivot must always be the very last thing in the SQL part of your solution. Remaining work has to take place client-side.

One scenario that I have seen one or two posters allude to was that they were loading files, and the files had a different format every time. That alone sounds suspicious, but if you really have such a situation, it does not seem like a good task for an all-SQL solution; rather, this should be orchestrated from a client-side language or something like SSIS (SQL Server Integration Services).

All that said, if you absolutely want a temp table with dynamic columns, there is a way to do it. Create a stub table with your fixed columns. Then you add the dynamic columns with ALTER TABLE ADD inside the dynamic SQL. I am not showing an example, because it is definitely nothing that I recommend. And it is certainly nothing for users with little experience of SQL, since you will need to know how to derive the metadata from the input for the temp table, so this is a trick for experienced users. I am tempted to say that you should only go down this path if you really love to hurt yourself.

One more alternative to deal with this situation is to use the CLR. See the section Using the CLR in my article How to Share Data Between Stored Procedures, where I introduce the stored procedure ExecAndInsert, which executes an SQL batch and creates a table for each result set, based on the metadata of the result set. I have used this procedure with success in a test harness, but it is nothing I recommend for application code.

BACKUP/RESTORE

Since I have said that dynamic SQL is a great tool for DBA tasks, you may be surprised to see BACKUP/RESTORE in this chapter, but there is a simple reason. It is not uncommon to see code like this:

DECLARE @db         sysname = 'Northwind',
        @backuppath nvarchar(200) = 'C:\temp',
        @sql        nvarchar(MAX)

SELECT @sql = 'BACKUP DATABASE ' + quotename(@db) + 
              ' TO DISK = ''' + @backuppath + '\' + @db + '-' +
              replace(convert(varchar(19), sysdatetime(), 126), ':', '') + '.bak'''
PRINT @sql
EXEC(@sql)

So what is wrong here? The problem is that you can do it as simply as this:

DECLARE @db         sysname = 'NorthDynamic',
        @backuppath nvarchar(200) = 'C:\temp',
        @filepath   nvarchar(250) 

SELECT @filepath = @backuppath + '\' + @db + '-' +
                replace(convert(varchar(19), sysdatetime(), 126), ':', '') + '.bak'
BACKUP DATABASE @db TO DISK = @filepath

That is, BACKUP and RESTORE accept variables for most or all of their arguments. Here is a working example for RESTORE:

DECLARE @db       sysname = 'BigDB',
        @dmpname  nvarchar(250) = 'S:\MSSQL\BigDB.bak',
        @dev1     sysname = 'BigDB',
        @mdfname  nvarchar(250) = 'S:\MSSQL\SJUTTON\BigDB.mdf',
        @dev2     sysname = 'BigDB_log',
        @ldfname  nvarchar(250) = 'S:\MSSQL\SJUTTON\BigDB.ldf',
        @stats    int = 10

RESTORE DATABASE @db FROM DISK = @dmpname
WITH MOVE @dev1 TO @mdfname,
     MOVE @dev2 TO @ldfname,
     STATS = @stats

(Change the database and file names to something you have on your server to test.)

I have not verified every piece of the syntax for the BACKUP and RESTORE commands. The syntax graphs in Books Online indicate that variables are supported in most places, but there are also parts where variables do not appear in the syntax. For instance, Books Online does not say that variables are supported with MOVE or STATS, but as you can see in the example above, they are accepted.

That said, there are situations where you will have to use dynamic SQL for these commands. If you want to make it a parameter whether to use COMPRESSION, INIT, FORMAT or some other such keyword, you will have to build a dynamic string – but you can still pass the database and the name of the backup file as parameters to sp_executesql.
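
Here is a minimal sketch of that pattern; the @compression flag and the file path are made up for the example:

DECLARE @db          sysname = 'NorthDynamic',
        @filepath    nvarchar(260) = 'C:\temp\NorthDynamic.bak',
        @compression bit = 1,
        @sql         nvarchar(MAX)

SELECT @sql = 'BACKUP DATABASE @db TO DISK = @filepath WITH INIT' +
              CASE @compression WHEN 1 THEN ', COMPRESSION' ELSE '' END

PRINT @sql
-- The keyword is spliced into the string, but the database and the file name
-- are still passed as parameters to sp_executesql.
EXEC sp_executesql @sql, N'@db sysname, @filepath nvarchar(260)', @db, @filepath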

Another situation where you need to use dynamic SQL is if you want to use a striped backup. As long as the number of stripes is fixed, there is no problem doing it statically:

BACKUP DATABASE @db TO DISK = @stripe1, DISK = @stripe2, DISK = @stripe3

But this breaks down if you want a variable number of stripes depending on the database size or something else, since the shape of the string depends on how many stripes you want. You could still use sp_executesql to execute the command, if you settle on a max number of stripes, say 50, and always pass @stripe1, @stripe2 up to @stripe50, even if not all these @stripe variables appear in the BACKUP command you generate. No, I do not really expect you to do this, nor would I do so myself, but I would build a BACKUP command with all the stripes inlined, and presumably I would have the names of the stripes (or some part of the names) in a table somewhere. And I would of course take precautions to avoid SQL injection and wrap all stripes in quotename or quotestring.

A similar case exists with RESTORE WITH MOVE. If you want to handle databases with any number of files, you will have to work with dynamic SQL, since you do not know beforehand how many MOVE clauses you will need.

But for the plain-vanilla everyday use of these commands, there is no need to use dynamic SQL. Simply use variables.

Conclusion

You have now read an article about dynamic SQL, and my hope is that you have learnt that there are both good and bad things you can do with dynamic SQL. Dynamic SQL can be a blessing for automating DBA tasks, and dynamic SQL used in the right place can be a blessing in application code too. But used incorrectly, dynamic SQL can be a curse, producing code that is almost impossible to read and maintain, that causes performance issues and that opens security vulnerabilities. And it is not uncommon that an urge to use dynamic SQL is due to a mistake earlier in the design where you have violated one or more of the basic rules for relational databases.

The very most important points of this article in summary:

  • Always use parameterised statements. Never inline a value that can be passed as a parameter!
  • When you build strings with object names in variables: always use quotename.
  • When you need to inline string values, for instance in DDL statements, use quotename with single quote as the second parameter, if you know that the value is <= 128 characters. Else use dbo.quotestring or dbo.quotestring_n.
  • Beware of SQL injection!
  • Always include this line when you work with dynamic SQL:
    IF @debug = 1 PRINT @sql


    Always!

Acknowledgements and Feedback

My articles do not exist in a vacuum, and I receive many helpful comments from MVP colleagues, readers of my articles and other people. For this article I would like to thank the following people for their input: Anthony Faull, Chintak Chhapia, Jonathan Van Houtte, John Smart and John Barrow.

If you have comments on the article, you are more than welcome to mail me on esquel@sommarskog.se. Your comment may be on the actual contents, but you are also more than welcome to point out spelling and grammar errors. Or just point out passages where I could explain things better. On the other hand, if you have a specific problem that you are struggling with, I encourage you to ask your question in a public forum, since it is very likely that you will get help faster that way, as more people will see your post. If you want me to see the question, the best place is the sql-server-transact-sql tag on the Microsoft Q&A forum, but I should warn you that time does not always permit me to read all posts there.

Revisions

2022-08-30

John Barrow was kind enough to give the article a thorough proof-reading, and I have corrected many spelling and grammar errors, some of which made the text quite unintelligible in places.

2021-09-03

Added a final paragraph to the section Working with a Table with Unknown Columns.

2021-01-01

John Smart was kind enough to share his functions SanitizeObjectName and SanitizeObjectNameEx, which are adapted from what I say in the section Receiving Names of Tables and Other Objects as Parameters, and this section now also includes a reference to and a quick example of his functions.

2020-12-24

Added a paragraph at the end of the section Doing Something in All Databases about a script searchalldatabases.sql that permits you to search for a value in all databases and tables on your instance.

2020-08-21

Enhanced the discussion on best practice for using two-part notation in the chapter on Performance. A bit shamefully, I found that the examples in the section on Dynamic Pivot did not use two-part notation. This has been fixed.

2020-01-12

Jonathan Van Houtte was kind enough to suggest a tip for the section The Necessity of Debug Prints about one more way to display long SQL strings.

2020-01-11

First version. (Yes, there was an article with the same title at the same URL, but this is essentially a completely new article on the same subject.)

Back to my home page.


