How do I prevent a trigger from firing for one process?


I recently saw a question on DBA Stack Exchange (it has since been deleted by the author) from someone who had a “special process” that ran regularly, and as part of that process, they disabled the trigger, did some stuff, and re-enabled it. During that process, the step that disables the trigger would deadlock and cause problems. So the asker was wondering how to catch & handle the deadlock during the DISABLE TRIGGER step.

Just disable the trigger, right?

Don’t disable the trigger.

Some backstory

I don’t have the original asker’s backstory, but I’ve run into scenarios like this in the past. One common scenario is adding a column to a table, then running a backfill script to go back & populate existing rows. You want that backfill script to run without tripping the trigger that sets a Modified Date, or does other audit tracking. Yes, the backfill is technically changing rows, but if it’s changing every row, it’s not necessarily useful to update every single row to show a modified date/time of the backfill.

More generically, it might be a case of ignoring system-generated updates, and only tracking user-generated edits. If that’s your business requirement, what’s the best way to enforce it?

Let’s try it

First, let’s create a table, and put a trigger on it. The trigger just populates LastModifiedUTC with “now” for every insert or update:

USE [/dev/null]
GO
DROP TABLE IF EXISTS dbo.SomeTableWithATrigger;
GO

CREATE TABLE dbo.SomeTableWithATrigger (
    SomeId           int identity(1,1),
    SomeStuff        varchar(100),
    LastModifiedUTC  datetime2(0)
    CONSTRAINT PK_SomeTableWithATrigger PRIMARY KEY CLUSTERED (SomeId)
    );
GO

CREATE OR ALTER TRIGGER AuditSomeTable
    ON dbo.SomeTableWithATrigger
    AFTER INSERT, UPDATE
AS
    UPDATE t
    SET LastModifiedUTC = GETUTCDATE()
    FROM dbo.SomeTableWithATrigger t
    JOIN inserted i ON i.SomeId = t.SomeId;
GO

If we insert a couple rows into that table, we see that the audit column gets updated appropriately:

USE [/dev/null]
GO
INSERT INTO dbo.SomeTableWithATrigger (SomeStuff)
VALUES ('Elmo'),('Big Bird');

SELECT *
FROM dbo.SomeTableWithATrigger
GO

We’re in business! But software development moves forward, and we need to add a new column.

USE [/dev/null]
GO
ALTER TABLE dbo.SomeTableWithATrigger
ADD OtherStuff varchar(100);

Let’s pretend this column is going to store a computed/cached value as an optimization to avoid always doing the calculation on the fly. We want to backfill it without “polluting” the existing values for LastModifiedUTC. How do we do it?

It’s pretty common to disable a trigger for this type of thing, but I hate doing that. If your database has any concurrency with real users, then the trigger will not fire for anyone while you do your backfill. That means there will be instances where it should have fired, but didn’t. That’s not good, so I’m going to call that the “wrong approach” because it fixes the problem for me, but causes incorrect data to be saved for everyone else! There is a significant difference between “disable the trigger while the process runs” and “disable the trigger for just the one process.”
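
For reference, the pattern I’m calling the “wrong approach” looks something like this (just a sketch of the usual disable/re-enable dance):

USE [/dev/null]
GO
--The trigger is off for EVERYONE between these two statements
DISABLE TRIGGER dbo.AuditSomeTable ON dbo.SomeTableWithATrigger;

--Run the backfill here, while everyone else's changes sneak past the trigger...

ENABLE TRIGGER dbo.AuditSomeTable ON dbo.SomeTableWithATrigger;
GO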

Instead, we need to modify the trigger. I want to add a block of code at the start that looks something like this:

    IF (<some test to determine if it’s my backfill>)
    BEGIN
        RETURN;
    END;

or maybe even like this:

    WHERE NOT (<some test to determine if it’s my backfill>);

The goal is simply to recognize the “special case” and bail out of the trigger, or otherwise not perform the update. But… How do I accurately identify my backfill?

  • UserName – I don’t like this idea. I might be running the backfill, but I might also use the application normally & want the trigger to fire for my changes. I could use a dedicated database migration user, but that seems error-prone, too. I would need to have some real certainty that I never-ever want the trigger to fire for that user. But this could be a good option if you have a specific background service that should run stealth updates.
  • HostName – I don’t like this one for the same reason as I dislike UserName. It might be reliable for my current use case, but it’s hard to future-proof.
  • Various other session- or connection-level info – You know what… these all get the same argument. Why did I bother trying to list them out separately? No matter what I pick, it gets hard to future-proof and ensure that I don’t skip trigger executions for valid updates. (The sketch after this list shows the built-ins I’m talking about.)
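
Just to make those options concrete, these are the built-ins I’d be leaning on for those checks (illustration only; none of this survives into the final solution):

SELECT SUSER_SNAME()    AS LoginName,       --the "UserName" option
       ORIGINAL_LOGIN() AS OriginalLogin,   --same idea, but survives impersonation
       HOST_NAME()      AS HostName,        --the "HostName" option
       APP_NAME()       AS ApplicationName; --one of the "various other" options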

Pause and take a big step backwards, Andy. Forget “trying to identify the backfill.” We want to modify the trigger and leave it that way for future backfills. Even better, we’d like to use the same coding pattern on other triggers. Making this reliable and future-proof has high value, because I’m too lazy to do this repeatedly. Instead, I want to just explicitly tell the trigger “Hey! You don’t have to do anything this time. Pinky promise!” and I want a way for the trigger to validate that I am authorized to make that promise.

SESSION_CONTEXT

Introduced in SQL Server 2016, SESSION_CONTEXT lets you set arbitrary key-value pairs in your session, which your code can then read. As the name suggests, it persists across batches within your session, and sticks around for as long as your session does.
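
If you haven’t used it before, the basic mechanics look like this (a minimal sketch, with a made-up key):

--Set a key-value pair for the current session
EXEC sp_set_session_context @key = N'SomeKey', @value = N'SomeValue';
GO
--Read it back later, even in a different batch or inside a trigger
SELECT SESSION_CONTEXT(N'SomeKey') AS SomeKey;
GO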

I can use sp_set_session_context to set a magic key, and the trigger can check that magic key to know whether to do the update or not. But anyone can use sp_set_session_context, and if they know the magic words (magic isn’t the same as secret), they’d be able to prevent triggers from firing.

Security roles

SQL Server has a concept of database roles. Let’s use them.

I can create a role in the database that contains only the system users and administrators that are authorized to do these magic, trigger-preventing updates. In some cases, you could leverage existing roles, and tie it to being a member of the sysadmin fixed server role, or the db_owner database role. However, those are powerful roles, and you may very well want to allow more granular permission to circumvent triggers. In fact, I may not want to allow all sysadmins or db owners to be able to do this–I can be more restrictive with my code by checking a specific role! The last reason is maybe the most important–I want it to be obvious that being assigned to this role means you can do an end-run around the triggers, so I want to give it a name that represents the access it grants. Code, especially when it comes to security, should value readability & maintainability.
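
If you did want to hang this off existing roles instead of a dedicated one, the checks would look something like this (just a sketch; I’m not going this route):

--Tie the "skip the trigger" decision to existing, powerful roles
SELECT IS_SRVROLEMEMBER('sysadmin') AS IsSysadmin,
       IS_ROLEMEMBER('db_owner')    AS IsDbOwner;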

Time to do it

First, we need to create that role & give the right people access. Don’t make fun of my role name.

USE [/dev/null]
GO
CREATE ROLE SkipTriggers;
ALTER ROLE SkipTriggers ADD MEMBER AMtwo; 
ALTER ROLE SkipTriggers ADD MEMBER AmazingMumford; 
ALTER ROLE SkipTriggers ADD MEMBER DatabaseMigrationService; 
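
If you want to sanity-check your membership before kicking off a backfill, it’s the same check the trigger is going to do:

--Returns 1 if the current user is a member of SkipTriggers
SELECT IS_ROLEMEMBER('SkipTriggers') AS CanSkipTriggers;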

I’m going to use this in my session before I try to do the backfill. In production, I’ll probably use a more descriptive key-value pair, but where’s the fun in that? After that terrible role name, I needed some fun.

EXEC sp_set_session_context N'Magic Words', N'Ala Peanut Butter Sandwiches';
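
Side note: sp_set_session_context also has a @read_only parameter. If I want to make sure nothing later in my session changes or clears the magic words, I can set the value as read-only–just remember that it then can’t be changed again until the session ends:

--Optional: lock the value in for the rest of the session
EXEC sp_set_session_context N'Magic Words', N'Ala Peanut Butter Sandwiches', @read_only = 1;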

Now, we need to modify the trigger to use that role & the session context. Notice that I added the IF block at the top that just does a RETURN when the magic conditions are met:

USE [/dev/null]
GO
CREATE OR ALTER TRIGGER AuditSomeTable
    ON dbo.SomeTableWithATrigger
    AFTER INSERT, UPDATE
AS
    --We're just going to return from the trigger & skip execution 
    IF (SESSION_CONTEXT(N'Magic Words') = N'Ala Peanut Butter Sandwiches'
        AND IS_ROLEMEMBER('SkipTriggers') = 1 
    )
    BEGIN
        RETURN;
    END;

    UPDATE t
    SET LastModifiedUTC = GETUTCDATE()
    FROM dbo.SomeTableWithATrigger t
    JOIN inserted i ON i.SomeId = t.SomeId;
GO

Now I can run the backfill

I added three users to my SkipTriggers role, so the following will need to be run by one of those three users.

USE [/dev/null]
GO
EXEC sp_set_session_context N'Magic Words', N'Ala Peanut Butter Sandwiches';  

--Before
SELECT *
FROM dbo.SomeTableWithATrigger;

--Update
UPDATE t
SET OtherStuff = 'Sesame Street'
FROM dbo.SomeTableWithATrigger t
WHERE OtherStuff IS NULL;

--and after
SELECT *
FROM dbo.SomeTableWithATrigger;

Like magic, it updated the rows without changing the LastModifiedUTC column. If you try running it as a user that isn’t a member of the SkipTriggers role, or without using sp_set_session_context to set the magic words, then the trigger fires as usual and LastModifiedUTC gets updated correctly.
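
If you want to see that for yourself, you can clear the magic words in the same session and do another update–the trigger goes right back to doing its job (a quick sketch; the new SomeStuff value is just for illustration):

--Clear the magic words; the trigger fires normally again
EXEC sp_set_session_context N'Magic Words', NULL;

UPDATE dbo.SomeTableWithATrigger
SET SomeStuff = 'Count von Count'
WHERE SomeId = 1;

--LastModifiedUTC now reflects this update
SELECT SomeId, SomeStuff, OtherStuff, LastModifiedUTC
FROM dbo.SomeTableWithATrigger;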

6 Comments

  1. Cool trick with the session context. I’d come across that “don’t run trigger” concept before, but hadn’t tied it to session values. We used a table that could turn the trigger on/off. The first thing the trigger did was check to see if the value in that table was set to “on”. If so, it would run that block of code. That was a neat trick then, but this is even more interesting. We were on SQL 2005 at the time, so we used what we had.

    • The problem with using a control “variable” in a table to turn off trigger logic is that it works for everyone. So if you turn off the trigger via the table, the trigger is disabled for everyone (so you may do this only in a maintenance window). And of course someone could forget to switch the table value back after his script.

      If you are using CONTEXT_INFO or SESSION_CONTEXT, it affects only your current session (so the app can still run in the background), and you can’t forget it, because when your session ends the “magic words” are gone too.

      • No argument on that one. I don’t know when Context_Info became available, but this was way back in the 2005-or-earlier days. The table worked well enough and was a good enough way to handle it at the time. But yes, there was a recognized need to be able to catch anything that happened while that block of code was disabled by a flag in the table. The good thing was that it wasn’t a highly critical piece – just something that did some text parsing and tweaked some values for display purposes. There was a process that could catch anything missed during the time that logic was disabled.

        Now the _need_ for that logic to be disabled … that’s a whole different issue, but it was a crazy bit of code that was written to only handle a single row at a time. The time to refactor it and regression test it was never a high priority so it stayed.

        I definitely won’t argue for this being any sort of best practice, but it was a decent workaround for that particular problem at that time. I’m filing this away (and zikato’s mention of the temp table trick) for future use. 🙂

  2. I’ve used the trick by Itzik Ben-Gan: test for a temp table inside the trigger – for example, #SkipTriggerLogic.

    Then in your code you can create the temp table, because its scope carries into nested modules too.

  3. Prior to SQL 2016, we did this with the Context_Info flag that was available. A very handy tool.

1 Trackback / Pingback

  1. Preventing Triggers from Firing for a Single Process – Curated SQL
