Running dynamic SQL on a database

You can call <dbname>.sys.sp_executeSQL to run dynamic SQL on a database, but what if you want to run dynamic SQL on a list of databases you have as strings?

I have used this to perform maintenance tasks on sets of related databases that could have different names (e.g. if they run multiple instances of our product’s databases on the same SQL server instance), with great success.
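The whole trick is name composition: `EXEC` accepts a procedure name from a variable, so prefixing `sys.sp_executesql` with a database name runs the statement in that database's context. Here is the composition step sketched in Python (function name hypothetical), for when the database names come from a list:

```python
def qualified_executesql(dbname=None):
    """Build the three-part name that runs sp_executesql in another database's context."""
    proc = "sys.sp_executesql"
    return f"{dbname}.{proc}" if dbname else proc

# One statement, many databases - the proc name carries the context.
for db in ["MCSAuditLog", "mcslottery"]:
    print(f"EXEC {qualified_executesql(db)} N'SELECT db_name();'")
```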

ALTER PROCEDURE [Admin].[Infrastructure_ExecuteSQL]
    @sql NVARCHAR(MAX),
    @dbname NVARCHAR(MAX) = NULL
AS
/*
Runs SQL statements in this database or another database.
You can use parameters.

EXEC dbo.Infrastructure_ExecuteSQL 'SELECT @@version, db_name();', 'MCSAuditLog';

20180803 DKD

For testing:
DECLARE @sql NVARCHAR(MAX) = 'SELECT @@version, db_name();';
DECLARE @dbname NVARCHAR(MAX) = 'mcslottery';
*/

DECLARE @proc NVARCHAR(MAX) = 'sys.sp_executesql';
IF (@dbname IS NOT NULL) SET @proc = @dbname + '.' + @proc;

EXEC @proc @sql;

Modifying a SOAP request on the client

Backstory: I’m connecting to a dinosaur of an ASMX web service. It had been working fine, but all of a sudden they told us they were implementing 2FA on their service. Their service takes in and responds with XML encoded inside a SOAP envelope. Had they put the 2FA fields in that XML, I would have finished the changes in 30 minutes and washed my hands of the whole thing… but they needed them in the SOAP header.

This kind of stuff is easy with WCF, but you really have to work to modify the SOAP envelope in ASMX. Here’s my template, to save you some time doing this in the future. You will have to open up Reference.cs in your ASMX client code and add the attribute to the methods whose output you want transformed.

Also check out https://www.hanselman.com/blog/ASMXSoapExtensionToStripOutWhitespaceAndNewLines.aspx for more details. That one is more of a server-side implementation but it did help me out.


    public class TransformSomethingExtension : SoapExtension
    {
        private Stream newStream;
        private Stream oldStream;
        string BodyString;

        public override object GetInitializer(Type serviceType)
        {
            return typeof(TransformSomethingExtension);
        }

        public override object GetInitializer(LogicalMethodInfo methodInfo, SoapExtensionAttribute attribute)
        {
            return attribute;
        }

        public override void Initialize(object initializer)
        {
            // get attribute
            TransformSomethingAttribute attr = initializer as TransformSomethingAttribute;
        }

        public override void ProcessMessage(SoapMessage message)
        {
            switch (message.Stage)
            {
                case SoapMessageStage.AfterSerialize:
                    // The outgoing envelope is fully serialized into newStream; transform it now.
                    Transform();
                    break;
                case SoapMessageStage.AfterDeserialize:
                    break;
                case SoapMessageStage.BeforeDeserialize:
                    // Copy the incoming response into the buffered stream, then rewind it.
                    oldStream.CopyTo(newStream);
                    // You need this because you're using the new stream as the main stream now.
                    // I left this out at first, and it caused me a lot of headache to figure it out.
                    newStream.Position = 0L;
                    break;
                case SoapMessageStage.BeforeSerialize:
                    break;
            }
        }

        public override Stream ChainStream(Stream stream)
        {
            this.oldStream = stream;
            this.newStream = new MemoryStream();
            return this.newStream;
        }

        void Transform()
        {
            newStream.Position = 0L;
            TextReader reader = new StreamReader(newStream);
            BodyString = reader.ReadToEnd();

            // This is where you transform BodyString as a string to what you need

            TextWriter writer = new StreamWriter(oldStream);
            writer.Write(BodyString);
            writer.Flush();
        }
    }

    public class TransformSomethingAttribute : SoapExtensionAttribute
    {
        // Fields
        private int priority;

        // Properties
        public override Type ExtensionType
        {
            get { return typeof(TransformSomethingExtension); }
        }

        public override int Priority
        {
            get { return this.priority; }
            set { this.priority = value; }
        }
    }
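Language aside, what ChainStream plus Transform do above is: hand the framework a buffer, let it serialize into the buffer, rewrite the buffered text, then copy the result to the real transport stream. A minimal sketch of that buffering idea in Python, with io.BytesIO standing in for the two streams (names hypothetical):

```python
import io

def transform_buffered(buffered: io.BytesIO, transport: io.BytesIO, transform) -> None:
    """Read the buffered message, transform it as text, write it to the real stream."""
    buffered.seek(0)                       # rewind: we're reading back what was serialized
    body = buffered.read().decode("utf-8")
    transport.write(transform(body).encode("utf-8"))

# Example: inject a header element into a buffered envelope.
buf, wire = io.BytesIO(), io.BytesIO()
buf.write(b"<Envelope><Body/></Envelope>")
transform_buffered(buf, wire, lambda s: s.replace("<Body/>", "<Header/><Body/>"))
print(wire.getvalue().decode("utf-8"))  # <Envelope><Header/><Body/></Envelope>
```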

Multithreaded Task Execution on a Timer

Backstory: Every time one of our APIs is accessed, we record a log of the request and the response. That means for each one, the headers and the body of the message. It’s very useful for debugging issues with our own APIs as well as issues with vendors interfacing with us.

The data for each row can get big. Like, really big. A call to our Winning Numbers API could return a payload that’s 1.5 MB in size. There really isn’t much I can do about that besides paging, and the page consuming this API call may not page the data at all. Of course, when I built the logging system, I didn’t think I would be recording 1.5 MB records into this temporary database… until I started getting Win32Exception timeouts. Writing to a separate temp database. On SSDs. In a RAID configuration.


I needed something to check whether I should even keep the logs. There’s a property on the vendors table that tells the system how many days to keep these logs; they’re automagically flushed at the end of each day, when there’s little load. The ability to record logs for a day is nice to have, so I don’t want to remove it, but I do want to be able to shut it off. Since I currently can’t, a 0 or a 1 in that field will do almost the same thing: I just need something in my code that says ‘if that value is 0, skip adding a log entirely’.

The next problem: implementing that check was easy, but I didn’t want to hit that table to look up the value every time a log was added. I also didn’t want to read it only once in the static constructor (cctor) of the class, so I decided reading the values every 30 seconds or so would be perfect. I can turn logging on at will, record some logs while problems are occurring, then turn it back off and analyze a portion without slowing things down too much.

The technique boils down to three pieces: an object to hold my values (in this case, a Dictionary mapping vendor ID to the number of days to keep records); a lock object, since these are APIs and the lifecycle initializes the class, runs the method, and destroys the class; and a timer to execute the function on a schedule.

The following class allows me to achieve this; see below for how I used it. Remember, you must lock every time it is used, or you may get a dirty read at best, or worse yet a NullReferenceException. (I almost said “at worst,” but backtracked because I’m sure with some ingenuity something far worse could happen.)
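The same shape can be sketched in Python, with threading standing in for System.Timers.Timer (class and names hypothetical): a shared value, a lock you take on every access, and a timer that refreshes the value on a schedule.

```python
import threading

class PeriodicRefreshCache:
    """A shared value, a lock, and a timer that refreshes the value on a schedule."""

    def __init__(self, interval_seconds, refresh):
        self.lock = threading.Lock()   # take this on EVERY read or write of .value
        self.value = None
        self._interval = interval_seconds
        self._refresh = refresh
        self.execute()                 # prime once immediately, then reschedule

    def execute(self):
        with self.lock:
            self.value = self._refresh()
        t = threading.Timer(self._interval, self.execute)
        t.daemon = True                # don't keep the process alive just for the timer
        t.start()

# Usage: refresh pulls vendor -> days-to-keep; readers lock before indexing.
cache = PeriodicRefreshCache(30.0, lambda: {1: 7, 2: 0})
with cache.lock:
    keep_days = cache.value.get(1, 1)
print(keep_days)  # 7
```

Unlike the C# class below, this sketch primes the value in the constructor; the C# version waits for the first timer tick, which is why its action must tolerate a null Object.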


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Timers;

namespace MCS.Lottery.Infrastructure.Logic
{
    /// Used to safely execute an action every so often.
    public class MultithreadedAccessTaskExecuter<TObject>
    {
        /// The object used in multiple threads.
        public TObject Object { get; set; }

        /// The lock object used when accessing from inside and outside of this class.
        /// Use this in a lock statement if accessing from anywhere.
        public object LockObject { get; } = new object();

        /// The action to take when executing. Be sure to lock on LockObject
        /// before doing anything with the shared Object.
        private Action ActionToExecute { get; set; }

        /// Repeats the action every N milliseconds.
        /// If null, the action will not be repeated at an interval.
        public double? ExecuteEveryMilliseconds
        {
            get { return timer.Enabled ? timer.Interval : (double?)null; }
            set
            {
                if (value == null)
                {
                    timer.Enabled = false;
                }
                else
                {
                    timer.Interval = value.Value;
                    timer.Enabled = true;
                }
            }
        }

        public void Execute()
        {
            ActionToExecute();
        }

        private Timer timer = new Timer();

        public MultithreadedAccessTaskExecuter(double? ms, Action whenExecuted)
        {
            if (whenExecuted == null)
                throw new ArgumentNullException(nameof(whenExecuted));
            ActionToExecute = whenExecuted;
            timer.Elapsed += Timer_Elapsed;
            // Set the interval last so the timer can't fire before the action is wired up.
            ExecuteEveryMilliseconds = ms;
        }

        private void Timer_Elapsed(object sender, ElapsedEventArgs e)
        {
            Execute();
        }
    }
}

How I used it:


private static Action waAction = new Action(() =>
{
    lock (WalletApplicationExpiries.LockObject)
    {
        if (WalletApplicationExpiries.Object == null)
            WalletApplicationExpiries.Object = new Dictionary<int, short>();

        using (var db = Data.DataFactory.LotteryDB())
        {
            var walletapps = db.WalletApplications.Select(w => new { w.ID, w.DaysToKeepAPIRecordsFor }).ToArray();
            foreach (var w in walletapps)
                WalletApplicationExpiries.Object[w.ID] = w.DaysToKeepAPIRecordsFor;
        }
    }
});

// Key: wallet application ID; value: days to keep records (key/value types assumed here).
public static MultithreadedAccessTaskExecuter<Dictionary<int, short>> WalletApplicationExpiries = new MultithreadedAccessTaskExecuter<Dictionary<int, short>>(30000, waAction);

public static void TrackPerformance(APIPerformanceTrackingEntry entry)
{
    if (entry == null) return; // do nothing.

    short wakeepdays = 1;
    if (entry.WalletApplicationID.HasValue)
    {
        lock (WalletApplicationExpiries.LockObject)
        {
            wakeepdays = WalletApplicationExpiries.Object[entry.WalletApplicationID.Value];
        }
    }

    if (wakeepdays > 0)
    {
        // ... record the log entry ...
    }
}



Setting up SQL Server Replication

This is how to do it using the replication UI:

  1. Be sure that you are connected to the servers in SSMS as the configured SA account, not a different login, on the server that is publishing the databases.
  2. You will need to have replication users set up – a Windows user as well as a SQL Server user.
    • If this is a domain, only one domain user is required for replication. I use <domain>\ServiceRunner for this task. This user must have db_owner privileges on each database being replicated, as well as db_owner on the system database called ‘distribution’ found on the master server. That database is created once initial setup of replication is done. If this is not on a domain, create the same user across both machines with the same password.
    • The SQL user – called 4LLrepl – must have the same permissions as the Windows user above. This must be set up on both servers, the publisher and the subscriber.
  3. If replication has never been set up on the server with the databases to be replicated, right click the Replication node and select Configure Distribution to launch the Configure Distribution Wizard.
    • The server will act as its own distributor.
    • The next screen asks for a path to set up replication – it defaults to the local path of the replication root (e.g. C:\SQL\MSSQL13.MSSQLSERVER\MSSQL\ReplData), but it recommends a UNC fileshare path.
      1. Copy the path and open up Windows Explorer. Paste the path into the path bar.
      2. Go back a folder, right click the repldata folder, and select Sharing.
      3. Share the folder, allowing the Windows replication user read/write permissions.
      4. When done, copy the resulting path, \\<server>\repldata.
      5. Back in the wizard, paste the UNC path in and click Next.
    • The next screen is distribution database details. Leave the defaults and click Next.
    • The next screen is publisher defaults. Hit Next.
    • If not already there, add the Windows and SQL replication users to SQL security.
      • If there are any unsafe assemblies in these databases, the sysadmin server role is needed for both accounts.
      • The accounts must be db_owner for each database being replicated, as well as for the system database called ‘distribution’.
      • The SQL user 4LLrepl also needs to exist on any subscriber servers.
      • If not using a domain account for ServiceRunner, the Windows account (ServiceRunner) needs to exist on each machine with the same password.
    • The publisher’s firewall needs to be adjusted to allow the subscriber to connect.
    • The subscriber’s firewall needs to be adjusted to allow the publisher to connect.
  4. Set up a publication for each database.
    • Right click the Local Publications node under the Replication node in the treeview for the publisher server and click New Publication.
    • Select the database to replicate.
    • Transactional publication.
    • Select everything to be included. NOTE – this does not include database synonyms which will have to be scripted to the target databases manually.
    • Next screen warns about things that could go wrong. If you see ‘Objects referenced by stored procedures are required’, most likely they are synonyms.
    • Next screen sets up filters for data.
    • Snapshot agent: Create a snapshot immediately should be checked. No need to check schedule agent.
    • Next screen is security. You will have to provide both the Windows and SQL replication accounts discussed earlier.
    • Wizard actions: Create the publication.
    • Name the publication the same as the DB.
    • Hit finish. The publication will be created. Launch Replication Monitor to ensure that the replication has started successfully.
  5. Add subscribers to each publication.
    • Publication: Select the publication to subscribe to.
    • Distribution Agent Location: Run agents at the distributor.
    • Subscribers: Add subscriber. It’s okay to connect to the DR as a Windows user (i.e. you don’t have to be SA right now).
      1. Once you connect to DR using the standard connection dialog, an entry in the rows will be added. Select an existing database to put this data in or create a new one using the options found in the dropdown under Subscription Database.
      2. If creating a new database, be sure to set the owner of the database to the configured SA account (SA or 4LLSA), and select Simple Logging.
    • Distribution Agent Security: Click the ellipsis button to enter the Windows and SQL usernames and passwords.
    • Synchronization Schedule: Run continuously
    • Initialize Subscriptions: Run immediately
    • Wizard Actions: Create the subscription
  6. Add any missing synonyms to the target database.
  7. Grant permission for replication services to write to a specific folder for MSSQL. Without setting this, there will be a bunch of errors with an error message similar to “could not write temp files to a folder. System returned errorcode 5” (sic).
    • Grant everyone modify permissions to the folder C:\Program Files\Microsoft SQL Server\130\COM (130=SQL 2016, 120=SQL 2014, 110=SQL 2012, etc)
  8. Set the max replicated text size on both servers. -1 means there is no limit.
  • EXEC sp_configure 'max text repl size', -1; RECONFIGURE;


  • Be sure the owner of all jobs and databases is the configured SA account.
  • Be sure each publication’s Publication Access List includes both the Windows and SQL replication accounts.
  • Be sure all replicated tables have a PK defined.

So, instead of using the UI to get the pub/sub set up, add this to any database and use it:

/****** Object: StoredProcedure [dbo].[Replication_SetupSingleDatabase] Script Date: 12/15/2017 10:37:48 AM ******/
ALTER PROCEDURE [dbo].[Replication_SetupSingleDatabase]
-- Required Input
@database NVARCHAR(MAX), -- Name of database, e.g. DKDWhatever, DKD_Custom_DKD
@action NVARCHAR(8), -- either 'setup' or 'teardown', depending on what you want to do
@execute BIT, -- 0 = add the commands to the publisher commands; 1 = execute those commands instead
@publishServer NVARCHAR(MAX), -- The HOSTNAME of the server that is the publisher. IP addresses don't work. Example: DKD-SQL1
@subscribingServer NVARCHAR(MAX), -- The HOSTNAME of the server that is the subscriber. IP addresses don't work. Example: DKD-DR1
@publisherReplicationServiceAccountUsername NVARCHAR(MAX), -- Windows account. Use ServiceRunner if following default platform configurations.
@publisherReplicationServiceAccountPassword NVARCHAR(MAX), -- Windows account password. Maybe you should execute this locally or over an encrypted connection.
@publisherReplicationSQLAccountUsername NVARCHAR(MAX), -- SQL account.
@publisherReplicationSQLAccountPassword NVARCHAR(MAX), -- SQL account password.
@subscriberReplicationServiceAccountUsername NVARCHAR(MAX), -- Windows account. If on a domain, same account as above. If not, copy the first account user/pass for this one.
@subscriberReplicationServiceAccountPassword NVARCHAR(MAX), -- Windows account password, same as first one above if not the same account.
@subscriberReplicationSQLAccountUsername NVARCHAR(MAX), -- SQL account. Same username/password as the first SQL account.
@subscriberReplicationSQLAccountPassword NVARCHAR(MAX), -- SQL account password, same as the other one.

-- Output
@publisherCommands NVARCHAR(MAX) OUTPUT,
@subscriberCommands NVARCHAR(MAX) OUTPUT,

-- Optional commands
@publicationName NVARCHAR(MAX) = NULL -- Name of the publication. If NULL, uses <database>_Publish
AS
/*
Creates or runs the scripts needed to set up replication between 2 servers, or destroy it.
Part of the script needs to be run in the context of the subscriber server to complete setup.
Made in response to how long it takes through the UI – no way we can keep wasting 15-60 minutes each setup.

If you're running this for the first time - READ THIS FIRST. This script makes a lot of assumptions.
- Have you even set up replication to begin with, bro?!
- The publisher's AND subscriber's Windows AND SQL replication accounts have been created - on both endpoints!
- If not on a domain, use 2 users with the same account username and password.
- If on a domain, use a domain account.
- Use the same password for both SQL replication accounts.
- This account does not need login privileges, but may need 'logon as batch job' rights.
- The naming convention is <domain>\ServiceRunner for the service account and DKDrepl as the SQL account.
- If there are any UNSAFE assemblies in any databases, the SQL replication account needs sysadmin. THIS IS BAD, btw.
- During initial setup of replication, you're using a UNC file path to store replication data (the repldata folder).
- The Windows replication service account needs read/write permissions to this share.
- Databases on the subscriber already exist, and have the same names as the source databases.
- The 2 aforementioned replication accounts have db_owner privileges on the source and destination databases.
- The 2 aforementioned replication accounts also have db_owner privileges on the distribution database (under system databases).
- The firewall allows a connection between the publisher and subscriber machines.
- The publisher and the subscriber must be able to communicate through a Windows hostname. IP addresses are not allowed.
- Grant everyone modify permissions to the folder C:\Program Files\Microsoft SQL Server\130\COM (130=SQL 2016, 120=SQL 2014, 110=SQL 2012, etc.)
- Run EXEC sp_configure 'max text repl size', -1; RECONFIGURE; on both servers to allow LOB data to be replicated.
- Be sure all databases are owned by the configured SA account.

What this script does:
- Sets up the publication as a transactional PULL publication. Cuz that's how we use it at DKD. Additions are welcome.
- Loops through the tables, views, procedures and functions and adds each article to the publication.
- Adds the user accounts to the publication access list.
- Starts the initialization snapshot agent.
- Writes out commands to run on the subscriber to finish setup.


What works for me may not work for you. You should add your own publication and subscription, and script it out.
Then compare the script to the script created by this procedure (with @execute = 0) to see if this will do what you want it to do.
Many of the things in here could be made into options.
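The @execute switch in this procedure is a simple collect-or-run (dry-run) pattern: every statement either executes immediately or gets appended to a script you can review first. A minimal sketch in Python (names hypothetical):

```python
def run_or_collect(statement, execute, runner, collected):
    """Dry-run pattern: either run the statement now or append it to a script for review."""
    if execute:
        runner(statement)
    else:
        collected.append(statement)
    return collected

# With execute=False, nothing runs; you get a reviewable script instead.
script = []
run_or_collect("EXEC sp_addpublication ...;", False, print, script)
run_or_collect("EXEC sp_addsubscription ...;", False, print, script)
print("\n".join(script))
```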

Cool things for future me to do in a future version:
- Set up the configuration as extended properties on the database.

20171214 DKD
*/

/* For testing:
-- The publication name
DECLARE @publicationName NVARCHAR(MAX) = 'MCSWalletDocuments_Publish';

-- The database to replicate
DECLARE @database NVARCHAR(MAX) = 'MCSWalletDocuments';

-- The action - setup or teardown replication
DECLARE @action NVARCHAR(MAX) = 'teardown';

-- The server that is publishing
DECLARE @publishServer VARCHAR(MAX) = 'DKD-SQL1';

-- The server that is subscribing
DECLARE @subscribingServer VARCHAR(MAX) = 'DKD-DR1';

-- Should we just print out what would be run rather than running it?
DECLARE @execute BIT = 0;

-- Publisher-side Windows replication account
DECLARE @publisherReplicationServiceAccountUsername NVARCHAR(MAX) = 'DKD\servicerunner';
DECLARE @publisherReplicationServiceAccountPassword NVARCHAR(MAX) = 'yamomma';

-- Publisher-side SQL replication account
DECLARE @publisherReplicationSQLAccountUsername NVARCHAR(MAX) = 'DKDrepl';
DECLARE @publisherReplicationSQLAccountPassword NVARCHAR(MAX) = 'yamomma';

-- Subscriber-side Windows replication account
DECLARE @subscriberReplicationServiceAccountUsername NVARCHAR(MAX) = 'DKD\servicerunner';
DECLARE @subscriberReplicationServiceAccountPassword NVARCHAR(MAX) = 'yamomma';

-- Subscriber-side SQL replication account
DECLARE @subscriberReplicationSQLAccountUsername NVARCHAR(MAX) = 'DKDrepl';
DECLARE @subscriberReplicationSQLAccountPassword NVARCHAR(MAX) = 'yamomma';



*/

DECLARE @tmp NVARCHAR(MAX);
DECLARE @c INT = 0; -- article counter
DECLARE @type CHAR(1), @obj SYSNAME, @sch SYSNAME; -- cursor fetch targets
DECLARE @verdesc VARCHAR(MAX) = 'DKD Replication Setup v0.0.0.1';
DECLARE @outputCommands NVARCHAR(MAX) = '';
SET @publisherCommands = '';
SET @subscriberCommands = '';

-- Set Defaults
SET @publicationName = COALESCE(@publicationName, @database + '_Publish');

-- We need a synonym to be able to read the database schema. Without this, we would only be able to
-- read the current database; this means you would have to switch to the DB you want to work with first.
-- So this makes things a little more flexible. Just be sure this procedure ends, that's where the cleanup happens. 😉
SET @tmp = '
USE tempdb;
-- Assumed targets for the ReplSetupTmp synonyms; adjust to match what the enumeration below reads.
CREATE SYNONYM dbo.ReplSetupTmp_IS_T FOR [' + @database + '].INFORMATION_SCHEMA.TABLES;
CREATE SYNONYM dbo.ReplSetupTmp_IS_V FOR [' + @database + '].INFORMATION_SCHEMA.VIEWS;
CREATE SYNONYM dbo.ReplSetupTmp_IS_S FOR [' + @database + '].INFORMATION_SCHEMA.ROUTINES;';
EXEC master..sp_executesql @tmp;

-- Enumerate all articles that would be replicated
PRINT 'Enumerating objects...';
WITH ctePrep (Obj, Val, Sch) AS
SELECT x.Obj, x.Val, x.Sch FROM ctePrep x
WHERE
-- ugh
x.Val NOT LIKE 'MSsubscription%'
AND x.Val NOT LIKE 'MSreplication%'
AND x.Val NOT LIKE 'sp_%diagram%'
AND x.Val NOT LIKE 'syncobj_%'
AND x.Val NOT LIKE 'MSpub%'
AND x.Val NOT LIKE 'MSpeer%'
AND x.Val NOT IN ('sysarticlecolumns', 'syspublications', 'syssubscriptions', 'sysarticleupdates', 'systranschemas', 'sysreplservers', 'sysarticles', 'sysschemaarticles', 'sysextendedarticlesview')

DECLARE @count_t INT;
DECLARE @count_v INT;
DECLARE @count_s INT;
DECLARE @count_f INT;
SELECT @count_t = (SELECT COUNT(*) FROM @objs WHERE T = 'T')
,@count_v = (SELECT COUNT(*) FROM @objs WHERE T = 'V')
,@count_s = (SELECT COUNT(*) FROM @objs WHERE T = 'S')
,@count_f = (SELECT COUNT(*) FROM @objs WHERE T = 'F');

PRINT CAST(@count_s + @count_t + @count_v + @count_f AS VARCHAR(MAX)) + ' total objects enumerated.';
PRINT ' - ' + CAST(@count_t AS VARCHAR(MAX)) + ' user tables';
PRINT ' - ' + CAST(@count_v AS VARCHAR(MAX)) + ' views';
PRINT ' - ' + CAST(@count_s AS VARCHAR(MAX)) + ' user stored procedures';
PRINT ' - ' + CAST(@count_f AS VARCHAR(MAX)) + ' user functions';

DECLARE @startQuery VARCHAR(MAX) = 'USE [' + @database + '];' + CHAR(13) + CHAR(10) + 'GO' + CHAR(13) + CHAR(10);
DECLARE @endQuery VARCHAR(MAX) = CHAR(13) + CHAR(10) + 'GO';

IF (@action = 'setup') BEGIN
PRINT 'Beginning setup of replication.';

PRINT 'Setting up publish server.';

-- This enables replication
SET @tmp = @startQuery + 'EXEC sp_replicationdboption @dbname = ''' + @database + ''', @optname = N''publish'', @value = N''true''' + @endQuery;
IF (@execute = 0) SET @outputCommands = @outputCommands + @tmp + CHAR(13) + CHAR(10); ELSE EXEC master..sp_executesql @tmp;
PRINT 'Replication DB option set to true.';

-- Add log reader agent
SET @tmp = @startQuery + 'EXEC [' + @database + '].sys.sp_addlogreader_agent @job_login = N''' + @publisherReplicationServiceAccountUsername + ''', @job_password = ''' + @publisherReplicationServiceAccountPassword + ''', @publisher_security_mode = 0, @publisher_login = N''' + @publisherReplicationSQLAccountUsername + ''', @publisher_password = N''' + @publisherReplicationSQLAccountPassword + ''', @job_name = null' + @endQuery;
IF (@execute = 0) SET @outputCommands = @outputCommands + @tmp + CHAR(13) + CHAR(10); ELSE EXEC master..sp_executesql @tmp;
PRINT 'Added log reader agent.';

-- Add publication
SET @tmp = @startQuery + 'EXEC sp_addpublication @publication = ''' + @publicationName + ''', @description = N''Transactional publication of database ' + @database + ' from Publisher ' + @verdesc + '; generated by ' + @verdesc + ''', @sync_method = N''concurrent'', @retention = 0, @allow_push = N''true'', @allow_pull = N''true'', @allow_anonymous = N''true'', @enabled_for_internet = N''false'', @snapshot_in_defaultfolder = N''true'', @compress_snapshot = N''false'', @ftp_port = 21, @ftp_login = N''anonymous'', @allow_subscription_copy = N''false'', @add_to_active_directory = N''false'', @repl_freq = N''continuous'', @status = N''active'', @independent_agent = N''true'', @immediate_sync = N''true'', @allow_sync_tran = N''false'', @autogen_sync_procs = N''false'', @allow_queued_tran = N''false'', @allow_dts = N''false'', @replicate_ddl = 1, @allow_initialize_from_backup = N''false'', @enabled_for_p2p = N''false'', @enabled_for_het_sub = N''false''' + @endQuery;
IF (@execute = 0) SET @outputCommands = @outputCommands + @tmp + CHAR(13) + CHAR(10); ELSE EXEC master..sp_executesql @tmp;
PRINT 'Added publication.';

-- Add publication snapshot
SET @tmp = @startQuery + 'EXEC sp_addpublication_snapshot @publication = N''' + @publicationName + ''', @frequency_type = 1, @frequency_interval = 0, @frequency_relative_interval = 0, @frequency_recurrence_factor = 0, @frequency_subday = 0, @frequency_subday_interval = 0, @active_start_time_of_day = 0, @active_end_time_of_day = 235959, @active_start_date = 0, @active_end_date = 0, @job_login = N''' + @publisherReplicationServiceAccountUsername + ''', @job_password = N''' + @publisherReplicationServiceAccountPassword + ''', @publisher_security_mode = 0, @publisher_login = N''' + @publisherReplicationSQLAccountUsername + ''', @publisher_password = N''' + @publisherReplicationSQLAccountPassword + ''';' + @endQuery;
IF (@execute = 0) SET @outputCommands = @outputCommands + @tmp + CHAR(13) + CHAR(10); ELSE EXEC master..sp_executesql @tmp;
PRINT 'Added publication snapshot.';

FETCH NEXT FROM c INTO @type, @obj, @sch;
WHILE (@@FETCH_STATUS = 0) BEGIN

PRINT ' - Adding type ' + @type + ' named ' + @obj + ' ...';

IF (@type = 'T') BEGIN

-- Add the table article (log-based)
SET @tmp = @startQuery + 'EXEC sp_addarticle @publication = N''' + @publicationName + ''', @article = N''' + @obj + ''', @source_owner = N''dbo'', @source_object = N''' + @obj + ''', @type = N''logbased'', @description = null, @creation_script = null, @pre_creation_cmd = N''drop'', @schema_option = 0x000000000803509F, @identityrangemanagementoption = N''manual'', @destination_table = N''' + @obj + ''', @destination_owner = N''dbo'', @vertical_partition = N''false'', @ins_cmd = N''CALL sp_MSins_dbo' + @obj + ''', @del_cmd = N''CALL sp_MSdel_dbo' + @obj + ''', @upd_cmd = N''SCALL sp_MSupd_dbo' + @obj + ''', @filter_clause = NULL' + @endQuery;
IF (@execute = 0) SET @outputCommands = @outputCommands + @tmp + CHAR(13) + CHAR(10); ELSE EXEC master..sp_executesql @tmp;
PRINT ' - Added article.';

-- Adding the article filter
--SET @tmp = @startQuery + 'EXEC sp_articlefilter @publication = N''@publicationName'', @article = N''ConfigurationSettings'', @filter_name = N''FLTR_ConfigurationSettings_1__68'', @filter_clause = N''1 = 2'', @force_invalidate_snapshot = 1, @force_reinit_subscription = 1' + @endQuery;
--IF (@execute = 0) SET @outputCommands = @outputCommands + @tmp + CHAR(13) + CHAR(10); ELSE EXEC master..sp_executesql @tmp;
--PRINT ' - Added article filter.';

-- Adding the article synchronization object
--SET @tmp = @startQuery + 'EXEC sp_articleview @publication = N''' + @publicationName + ''', @article = N''' + @obj + ''', @view_name = N''SYNC_' + @type + '_' + @obj + ''', @filter_clause = NULL, @force_invalidate_snapshot = 1, @force_reinit_subscription = 1' + @endQuery;
--IF (@execute = 0) SET @outputCommands = @outputCommands + @tmp + CHAR(13) + CHAR(10); ELSE EXEC master..sp_executesql @tmp;
--PRINT ' - Added article sync object.';

END ELSE BEGIN

-- Add the schema-only article (views, procedures, functions)
SET @tmp = @startQuery + 'EXEC sp_addarticle @publication = N''' + @publicationName + ''', @article = N''' + @obj + ''', @source_owner = N''dbo'', @source_object = N''' + @obj + ''', @type = N''' + CASE @type WHEN 'V' THEN 'View' WHEN 'S' THEN 'proc' WHEN 'F' THEN 'func' ELSE 'unknown' END + ' schema only'', @description = null, @creation_script = null, @pre_creation_cmd = N''drop'', @schema_option = 0x0000000008000001, @destination_table = N''' + @obj + ''', @destination_owner = N''dbo'';' + @endQuery;
IF (@execute = 0) SET @outputCommands = @outputCommands + @tmp + CHAR(13) + CHAR(10); ELSE EXEC master..sp_executesql @tmp;
PRINT ' - Added article.';

END

SET @c = @c + 1;
FETCH NEXT FROM c INTO @type, @obj, @sch;
END

PRINT 'Processed ' + CAST(@c AS VARCHAR(MAX)) + ' articles in total.';


-- Set up subscriber on publisher
SET @tmp = @startQuery + 'EXEC sp_addsubscription @publication = N''' + @publicationName + ''', @subscriber = N''' + @subscribingServer + ''', @destination_db = N''' + @database + ''', @sync_type = N''Automatic'', @subscription_type = N''pull'', @update_mode = N''read only''' + @endQuery;
IF (@execute = 0) SET @outputCommands = @outputCommands + @tmp + CHAR(13) + CHAR(10); ELSE EXEC master..sp_executesql @tmp;
PRINT 'Added subscription to publisher. Additional statements need to run on subscriber to complete setup.';

-- Add Windows users to publication
SET @tmp = @startQuery + 'EXEC sys.sp_grant_publication_access @publication = N''' + @publicationName + ''', @login = N''' + @publisherReplicationServiceAccountUsername + ''';' + @endQuery;
IF (@execute = 0) SET @outputCommands = @outputCommands + @tmp + CHAR(13) + CHAR(10); ELSE EXEC master..sp_executesql @tmp;
PRINT 'Added ' + @publisherReplicationServiceAccountUsername + ' to publication access list.';

-- Add SQL users to publication
SET @tmp = @startQuery + 'EXEC sys.sp_grant_publication_access @publication = N''' + @publicationName + ''', @login = N''' + @publisherReplicationSQLAccountUsername + ''';' + @endQuery;
IF (@execute = 0) SET @outputCommands = @outputCommands + @tmp + CHAR(13) + CHAR(10); ELSE EXEC master..sp_executesql @tmp;
PRINT 'Added ' + @publisherReplicationSQLAccountUsername + ' to publication access list.';

-- Get the job to start the snapshot
SET @tmp = @startQuery + 'EXEC sp_startpublication_snapshot @publication = ''' + @publicationName + ''';';
IF (@execute = 0) SET @outputCommands = @outputCommands + @tmp + CHAR(13) + CHAR(10); ELSE EXEC master..sp_executesql @tmp;
PRINT 'Started database publication.';

-- Add synonyms
DECLARE @synonyms VARCHAR(MAX) = '';
WITH cteRaw AS
(
SELECT TOP 123456789
COALESCE(PARSENAME(base_object_name,4),@@servername) AS serverName,
COALESCE(PARSENAME(base_object_name,3),DB_NAME(DB_ID())) AS dbName,
COALESCE(PARSENAME(base_object_name,2),SCHEMA_NAME(SCHEMA_ID())) AS schemaName,
PARSENAME(base_object_name,1) AS objectName
FROM sys.synonyms
ORDER BY serverName, dbName, schemaName, objectName
)
SELECT @synonyms = @synonyms + CHAR(13) + CHAR(10) + 'IF OBJECT_ID(''' + x.schemaName + '.' + x.objectName + ''') IS NULL '
+ 'CREATE SYNONYM ' + x.objectName + ' FOR [' + dbName + '].[' + x.schemaName + '].[' + x.objectName + '];'
FROM cteRaw x;
SELECT @tmp = @startQuery + @synonyms + @endQuery;
IF (@execute = 0) SET @outputCommands = @outputCommands + @tmp + CHAR(13) + CHAR(10); ELSE EXEC master..sp_executesql @tmp;
PRINT 'Created any missing synonyms.';

-- Done - print out statements to run on subscriber
PRINT 'Row data contains statements to run on ' + @subscribingServer + ' to complete the subscriber setup.';

SET @subscriberCommands = ''
+ '-- Run the following statements on the subscriber to complete the subscription process.' + CHAR(13) + CHAR(10)
+ 'USE [' + @database + ']' + CHAR(13) + CHAR(10)
+ 'GO' + CHAR(13) + CHAR(10)
+ 'EXEC sp_addpullsubscription @publisher = N''' + @publishServer + ''', @publication = N''' + @publicationName + ''', @publisher_db = N''' + @database + ''', @independent_agent = N''True'', @subscription_type = N''pull'', @description = N'''', @update_mode = N''read only'', @immediate_sync = 1' + CHAR(13) + CHAR(10)
+ 'GO' + CHAR(13) + CHAR(10)
+ 'EXEC sp_addpullsubscription_agent @publisher = N''' + @publishServer + ''', @publisher_db = N''' + @database + ''', @publication = N''' + @publicationName + ''', @distributor = N''' + @publishServer + ''', @distributor_security_mode = 0, @distributor_login = N''' + @subscriberReplicationSQLAccountUsername + ''', @distributor_password = N''' + @subscriberReplicationSQLAccountPassword + ''', @enabled_for_syncmgr = N''False'', @frequency_type = 64, @frequency_interval = 0, @frequency_relative_interval = 0, @frequency_recurrence_factor = 0, @frequency_subday = 0, @frequency_subday_interval = 0, @active_start_time_of_day = 0, @active_end_time_of_day = 235959, @active_start_date = 20000101, @active_end_date = 99991231, @alt_snapshot_folder = N'''', @working_directory = N'''', @use_ftp = N''False'', @job_login = N''' + @subscriberReplicationServiceAccountUsername + ''', @job_password = N''' + @subscriberReplicationServiceAccountPassword + ''', @publication_type = 0' + CHAR(13) + CHAR(10)
+ 'GO';

END ELSE IF (@action = 'teardown') BEGIN
	PRINT 'Beginning removal of replication.';

	-- Dropping the transactional subscription as a whole
	SET @tmp = @startQuery + 'exec sp_dropsubscription @publication = N''' + @publicationName + ''', @subscriber = N''' + @subscribingServer + ''', @destination_db = N''' + @database + ''', @article = N''all''' + @endQuery;
	IF (@execute = 0) SET @outputCommands = @outputCommands + @tmp + CHAR(13) + CHAR(10); ELSE EXEC master..sp_executesql @tmp;
	PRINT 'Dropped subscription.';

	-- Loop over each published article
	OPEN c;
	FETCH NEXT FROM c INTO @type, @obj, @sch;
	WHILE (@@FETCH_STATUS = 0) BEGIN

		-- Drop subscription
		SET @tmp = @startQuery + 'exec sp_dropsubscription @publication = N''' + @publicationName + ''', @article = N''' + @obj + ''', @subscriber = N''all'', @destination_db = N''all''' + @endQuery;
		IF (@execute = 0) SET @outputCommands = @outputCommands + @tmp + CHAR(13) + CHAR(10); ELSE EXEC master..sp_executesql @tmp;
		PRINT ' - Dropped subscription ' + @obj + '.';

		-- Drop article
		SET @tmp = @startQuery + 'exec sp_droparticle @publication = N''' + @publicationName + ''', @article = N''' + @obj + ''', @force_invalidate_snapshot = 1' + @endQuery;
		IF (@execute = 0) SET @outputCommands = @outputCommands + @tmp + CHAR(13) + CHAR(10); ELSE EXEC master..sp_executesql @tmp;
		PRINT ' - Dropped published article ' + @obj + '.';

		SET @c = @c + 1;
		FETCH NEXT FROM c INTO @type, @obj, @sch;
	END
	CLOSE c;
	DEALLOCATE c;

	PRINT 'Processed ' + CAST(@c AS VARCHAR(MAX)) + ' articles in total.';


	-- Dropping the transactional publication
	SET @tmp = @startQuery + 'exec sp_droppublication @publication = N''' + @publicationName + '''' + @endQuery;
	IF (@execute = 0) SET @outputCommands = @outputCommands + @tmp + CHAR(13) + CHAR(10); ELSE EXEC master..sp_executesql @tmp;
	PRINT 'Dropped publication.';

	PRINT 'Row data contains statements to run on ' + @subscribingServer + ' to complete the subscriber teardown.';


IF (@outputCommands != '') BEGIN
	PRINT 'SQL to run on PUBLISHER: ';
	PRINT @outputCommands;
	SET @publisherCommands = @outputCommands;
END

-- Cleanup

EXEC master..sp_executesql N’USE tempdb; DROP SYNONYM dbo.ReplSetupTmp_IS_T; DROP SYNONYM dbo.ReplSetupTmp_IS_V; DROP SYNONYM dbo.ReplSetupTmp_IS_S’;




Making date differences in SQL queries stand out

I had another employee here create a report, using SQL Server's LEAD and LAG functions, that lists transactions from the transaction ledger for customers where the previous balance doesn't match the balance on the next transaction. To make spans of time quick to identify, I wanted something like '5s', '44m 5s', or '2d 0h 45m 5s' next to each time; my eyes automatically read longer strings as larger spans of time.

-- User Defined Function

CREATE FUNCTION [dbo].[Humanize_DateDiff]
(
 @t1 DATETIME2(7),
 @t2 DATETIME2(7),
 @showOnlyItems VARCHAR(50) = NULL
)
RETURNS VARCHAR(MAX)
AS
/*
Pretty-prints the time difference between two dates.

@showOnlyItems should be NULL to include all fields, or a comma-delimited list
(with no spaces) of any combination of D,H,M,S,MS.

No zero-unit item is printed until at least one non-zero unit has been printed. For example,
if there are no days, then '0d ' won't be printed at all.

SELECT dbo.Humanize_DateDiff('12/1/2016', '12/25/2016 3:45:12.678 AM', NULL); -- 24d 3h 45m 12s 678ms
SELECT dbo.Humanize_DateDiff('12/1/2016', '12/25/2016 3:45:12.678 AM', 'd,h,m'); -- 24d 3h 45m

20170823 DKD
*/

/* For testing.
 DECLARE @t1 DATETIME2(7) = DATEADD(MILLISECOND, -((1000 * 60 * 60 * 24 * 3) + 4890976), SYSUTCDATETIME());
 DECLARE @t2 DATETIME2(7) = SYSUTCDATETIME();
 DECLARE @showOnlyItems VARCHAR(50) = 'D,H,M,S'; -- Comma delimited, no spaces between items, only use combos of ms,s,m,h,d; NULL = show everything
*/
BEGIN
 DECLARE @op VARCHAR(MAX) = '';


 DECLARE @t_total BIGINT = 0;
 DECLARE @t_ms BIGINT, @show_ms BIT = 1;
 DECLARE @t_s INT, @show_s BIT = 1;
 DECLARE @t_m INT, @show_m BIT = 1;
 DECLARE @t_h INT, @show_h BIT = 1;
 DECLARE @t_d INT, @show_d BIT = 1;

IF (@showOnlyItems IS NOT NULL) BEGIN
 SET @showOnlyItems = ',' + @showOnlyItems + ',';
 IF (CHARINDEX(',d,', @showOnlyItems) = 0) SET @show_d = 0;
 IF (CHARINDEX(',h,', @showOnlyItems) = 0) SET @show_h = 0;
 IF (CHARINDEX(',m,', @showOnlyItems) = 0) SET @show_m = 0;
 IF (CHARINDEX(',s,', @showOnlyItems) = 0) SET @show_s = 0;
 IF (CHARINDEX(',ms,', @showOnlyItems) = 0) SET @show_ms = 0;
END

SET @t_total = DATEDIFF(MILLISECOND, @t1, @t2);

-- Days
 SELECT @t_d = FLOOR(@t_total / (1000 * 60 * 60 * 24));
 SELECT @t_total = @t_total - (@t_d * 1000 * 60 * 60 * 24);
 --PRINT '@t_d = ' + COALESCE(CAST(@t_d AS VARCHAR(MAX)), '') + '; @t_total = ' + COALESCE(CAST(@t_total AS VARCHAR(MAX)), '');
 IF ((@op != '' OR @t_d > 0) AND @show_d = 1) SELECT @op = @op + ' ' + CAST(@t_d AS VARCHAR(MAX)) + 'd';

-- Hours
 SELECT @t_h = FLOOR(@t_total / (1000 * 60 * 60));
 SELECT @t_total = @t_total - (@t_h * 1000 * 60 * 60);
 --PRINT '@t_h = ' + COALESCE(CAST(@t_h AS VARCHAR(MAX)), '') + '; @t_total = ' + COALESCE(CAST(@t_total AS VARCHAR(MAX)), '');
 IF ((@op != '' OR @t_h > 0) AND @show_h = 1) SELECT @op = @op + ' ' + CAST(@t_h AS VARCHAR(MAX)) + 'h';

-- Minutes
 SELECT @t_m = FLOOR(@t_total / (1000 * 60));
 SELECT @t_total = @t_total - (@t_m * 1000 * 60);
 --PRINT '@t_m = ' + COALESCE(CAST(@t_m AS VARCHAR(MAX)), '') + '; @t_total = ' + COALESCE(CAST(@t_total AS VARCHAR(MAX)), '');
 IF ((@op != '' OR @t_m > 0) AND @show_m = 1) SELECT @op = @op + ' ' + CAST(@t_m AS VARCHAR(MAX)) + 'm';

-- Seconds
 SELECT @t_s = FLOOR(@t_total / 1000);
 SELECT @t_total = @t_total - (@t_s * 1000);
 --PRINT '@t_s = ' + COALESCE(CAST(@t_s AS VARCHAR(MAX)), '') + '; @t_total = ' + COALESCE(CAST(@t_total AS VARCHAR(MAX)), '');
 IF ((@op != '' OR @t_s > 0) AND @show_s = 1) SELECT @op = @op + ' ' + CAST(@t_s AS VARCHAR(MAX)) + 's';

-- Milliseconds
 SELECT @t_ms = @t_total;
 SELECT @t_total = @t_total - (@t_ms); -- Should == ZERO now
 --PRINT '@t_ms = ' + COALESCE(CAST(@t_ms AS VARCHAR(MAX)), '') + '; @t_total = ' + COALESCE(CAST(@t_total AS VARCHAR(MAX)), '');
 IF ((@op = '' OR @t_ms > 0) AND @show_ms = 1) SELECT @op = @op + ' ' + CAST(@t_ms AS VARCHAR(MAX)) + 'ms';

SET @op = LTRIM(@op);

--SELECT @op;

RETURN @op;
END


Note that I didn’t have the code recalculate figures when a unit is excluded: if I were to exclude days, the hours wouldn’t absorb the excluded days’ worth of hours.
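The same bucketing logic, sketched in JavaScript (my own illustration, not part of the original function), makes the unit-by-unit division easy to see. Like the T-SQL version, it suppresses leading zero-valued units:

```javascript
// Split a millisecond span into d/h/m/s/ms buckets, largest unit first,
// skipping zero-valued units until a non-zero unit has been printed.
function humanizeMs(totalMs) {
  var units = [
    { label: "d",  size: 1000 * 60 * 60 * 24 },
    { label: "h",  size: 1000 * 60 * 60 },
    { label: "m",  size: 1000 * 60 },
    { label: "s",  size: 1000 },
    { label: "ms", size: 1 }
  ];
  var out = [];
  var rest = totalMs;
  units.forEach(function (u) {
    var n = Math.floor(rest / u.size);
    rest -= n * u.size;
    // Like the SQL version: zero values only appear after a non-zero unit.
    if (out.length > 0 || n > 0) out.push(n + u.label);
  });
  return out.join(" ") || "0ms";
}
```

For example, `humanizeMs(2645000)` yields `"44m 5s 0ms"`, mirroring the SQL function's default (all-units) output.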


Grabbing a set and inserting 'and' before the last item in SQL (with and without the Oxford comma)

Our client wanted to display numbers that were drawn as follows:

“The numbers drawn were 01, 02, 03, 04 and 05, resulting in a win of $X.XX!”

Well, I had misinterpreted this, thinking he wanted all of the drawn numbers, when what he really wanted was just the numbers that caused the winnings to occur. The easy part was the INTERSECT statement to determine the numbers; the hard part was that, because this was being inserted into an SSRS report, I needed to comma-delimit the numbers with an ‘and’ before the last item in the set. I made this snippet to get the logic right.

DECLARE @oxfordComma BIT = 0;
WITH cte1 AS
(
	-- The set to format, comma-delimited. (Placeholder value here; the real query used an INTERSECT of drawn and played numbers.)
	SELECT '01, 02, 03, 04, 05' AS NumbersCommaDelimited
)
--SELECT '|' + cte1.NumbersCommaDelimited + '|' FROM cte1
, cte2 AS
(
	SELECT NumbersCommaDelimited AS X, REVERSE(NumbersCommaDelimited) AS Rev FROM cte1
)
, cte3 AS
(
	SELECT LEN(Rev) - CHARINDEX(',', Rev, 1) + CASE WHEN @oxfordComma = 1 THEN 2 ELSE 1 END AS Pos, X FROM cte2
)
, cte4 AS
(
	SELECT STUFF(X, Pos, CASE WHEN @oxfordComma = 1 THEN 0 ELSE 1 END, ' and') AS Y, X, Pos FROM cte3
)
SELECT X, Pos, Y FROM cte4;
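The same insert-an-'and' trick is easier to see in a few lines of JavaScript (a sketch of mine, not from the original post), where the @oxfordComma flag becomes a boolean parameter:

```javascript
// Join an array with commas, inserting "and" before the last item,
// optionally with an Oxford comma before the "and".
function joinWithAnd(items, oxfordComma) {
  if (items.length < 2) return items.join("");
  var head = items.slice(0, -1).join(", ");
  return head + (oxfordComma ? ", and " : " and ") + items[items.length - 1];
}
```

`joinWithAnd(["01","02","03","04","05"], false)` gives `"01, 02, 03, 04 and 05"`; pass `true` for `"01, 02, 03, 04, and 05"`.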


Removing elements from the flow of the document in a SSRS report

We’re designing a report that is printed and sent out to customers that has details about lottery syndicate entries. The details include the following data:

  • Details about winnings earned from the previous month, if any
  • Details about termination notice, if one was supplied
  • Details about participation in the next month, if any

The key words used here are “if any”. There’s a message displayed on the report that says “Your termination notice was received on 4/28/2017 and your last month of play will be June 2017”. If the customer never called in to terminate their membership, this paragraph shouldn’t show up at all. However, this is where the trouble begins. If I add a visibility expression such as =IIF(IsNothing(First(Fields!ReceivedTerminationNoticeOn.Value, "DataSet")), True, False) to the paragraph that generates the text in quotes above, the paragraph is hidden. However, to use a CSS analogy, it behaves like visibility:hidden rather than display:none: the element is invisible, but it isn’t taken out of the document flow, so it still takes up space and leaves a big ugly gap. These documents are printed and snail-mailed to people, so they have to look good. So, after about 15 minutes perusing StackOverflow for answers and about 30 minutes of fooling around with SSRS, I figured out how to trick SSRS into removing the element completely, shifting other page elements up.

  1. Drag a new table onto the design surface. I’m using VS 2015 and SSDT-BI 17.1, but I believe it’s worked this way since the beginning of time: A new table is created, that has 3 empty columns, and 1 row group, with 2 rows: The header, and the row ‘template’ that is repeated for each row in the dataset.
  2. Delete 2 of the columns. Now you have one column and 2 rows. We’re going to be adding something to the ‘header’ cell, since it’s only shown once, not once for each item in the dataset.
  3. Any element you drag into a cell is going to take up all available space in that cell, meaning the cell can have only one child element, and that element will resize with the table. If you want to add more elements, or position them how you like, drag a rectangle into this header cell first, then add your elements inside it.
  4. Click in the header cell, but not in any elements in the cell, and right-click the bar on the left and select ‘Row Visibility…’, and then enter an expression that dictates whether the cell should be shown or not.
  5. Click in the ‘template’ row, then click the bar on the left and select ‘Row Visibility’. Select ‘Hide’ instead of the default ‘Show’.
  6. Size your table as you need it, and while you cannot delete the ‘template’ row, you can size it so it’s 1px high and it’s not going to show anyway since you set the row visibility to hide.

That will now collapse the table if the visibility condition isn’t met, rather than just making it ‘invisible’ and taking up space.

Caveat: The ‘elephant in the room’ is datasets. You can totally add a child table into this one, as that’s how we were able to show the results of the items. The outer table will grow to accommodate the sizing of its children. However, the dataset used by the elements inside must be the same as the outer dataset. This can obviously cause some problems if you want to have 2 tables with different datasets, but there is a way… which, by the way, is also not pretty.

Let’s say you have 2 datasets that you want to hide but you can only have one dataset as per the rules above.

Dataset1: Let’s say it’s database table ‘TableA’ and has columns A, B, and C; row with values 1, 2, 3 and row with values 4, 5, 6

Dataset2: Database table ‘TableB’ has columns D, E, and F; row with values 11, 12, 13 and row with values 14, 15, 16.

Combine them into a single dataset something like this SQL pseudocode:

SELECT 'TableA' AS [Dataset], A, B, C, NULL AS D, NULL AS E, NULL AS F FROM dbo.TableA
UNION ALL
SELECT 'TableB', NULL, NULL, NULL, D, E, F FROM dbo.TableB;
On the inner table, you’ll have the columns you want to display, and you can then set a table filter to only show rows that match specific criteria, such as [Dataset] = 'TableA'. The other table would have the same thing, but it would = ‘TableB’. This way, you have multiple datasets under the guise of a single dataset. You would have to UNION all of the rows together and have NULLs in the columns that aren’t used for a particular dataset.

My dataset had the first column called ‘Dataset’ and had one row for ‘Customer’, one for ‘Winnings_MajorLottery1’, and more for ‘Winnings_MajorLottery2’, ‘NextDrawings_Overview’, and ‘NextDrawings_LinesPlayed’. I didn’t know of an easy way in SSRS to find the number of rows in a table when using filters (to hide the table when there weren’t any rows), so for the case where there are no rows in the ‘Winnings_MajorLottery1’ “dataset”, I added a BIT column in the “Customer” dataset called ‘HasMajorLottery1Winnings’; it was far easier to make a BIT field do my bidding than to finagle some crazy way to make it work right in SSRS.

Yes, this means you’re going to have a lot of columns – column 1 is the ‘Dataset’ column, then all of the columns from the first dataset, then all of the columns from the next dataset, which means that your query is going to have a lot of NULL entries for the columns that aren’t used for a particular dataset. You’re going to have to structure the SQL well so it’s readable in the sea of NULLs the query is going to have.

However, you may not have to go overboard for simpler reports, but the one I was working on (still am working on, who am I kidding!) was anything but simple and required some outside-of-the-box thinking.

Happy programming!


Javascript Quick & Dirty Dynamic Numeric Evaluations From Dynamic Content

I’m writing a help page for a lottery order form to conform to specific gaming standards. The page has to show odds for a given set of games that are offered on the order form. I made the decision to pull the content from the database, so that it can be edited from the admin site rather than us going in and editing it. I also decided that a Markdown syntax would restrict how the content was displayed yet allow a degree of power in how it was formatted.

However, there definitely was a problem in displaying things like pay tables. I had to insert a couple of tables into this content to show odds and payout factors for the games that were displayed. Here’s the table in Markdown (GFM-flavor) format. For those who don’t know what Markdown is, it is a method of writing content that is still very readable in its un-rendered form.

# Payout Factors for Different Bet Types

Game Type | Combination | Example Bet | Payout
 ------------ | ------------------------------------------- | -------------------- | --------
 Pick 2 | Straight: Any 2 digit # from 00 to 99 | 12 (exact order) | 90:1
 Pick 2 | Boxed (2-way; 2 unique digits) | 12 (any order) | 45:1
 Pick 3 | Straight: Any 3 digit # from 000 to 999 | 123 (exact order) | 900:1
 Pick 3 | Boxed (6-way; 3 unique digits) | 123 (any order) | 150:1
 Pick 3 | Boxed (3-way, 2 unique digits) | 334 (any order) | 300:1
 Pick 4 | Straight: Any 4 digit # from 0000 to 9999 | 1234 (exact order) | 9000:1
 Pick 4 | 24-Way Box (4 digit # with unique digits) | 1234 (any order) | 375:1
 Pick 4 | 12-Way Box (4 digit #, 3 unique digits) | 1233 (any order) | 750:1
 Pick 4 | 6-Way Box (4 digit #, 2 sets of 2 digits) | 3344 or 4343 (any order) | 1500:1
 Pick 4 | 4-Way Box (4 digit, 3 digits the same) | 1112 (any order) | 2250:1

The above renders into an H1 tag for the title and an HTML table for the table portion.

That’s all fine, but see where it says 90:1 for a straight bet? I don’t want a human to edit that number. Additionally, I don’t want to use an example number, since that would force the users to mentally calculate the payout factors themselves, and could possibly violate the standards we are implementing. Therefore, the actual payout factor values should be pulled from the database. There are a number of different ways to do this, one being a replacement token, e.g. {2BallPF}:1. But then, I gotta do something like {2BallPF/2}:1 for the boxed bet, which I would have to figure out on the server… sounds like a huge pain. I could just let the users edit it every time payout factors are mass-changed…

 Pick 2 | Straight: Any 2 digit # from 00 to 99 | 12 (exact order) | 70:1
Pick 2 | Boxed (2-way; 2 unique digits) | 12 (any order) | 45:1
Pick 3 | Straight: Any 3 digit # from 000 to 999 | 123 (exact order) | 700:1
Pick 3 | Boxed (6-way; 3 unique digits) | 123 (any order) | 150:1
Pick 3 | Boxed (3-way, 2 unique digits) | 334 (any order) | 300:1

NO. I can’t entrust the operators of the system to make these changes any time a payout factor update is done. And I definitely don’t want to manage that crap. It always happens on a Friday night when I am relaxing, or any time on Saturday or Sunday. I really value my free time, even if it’s to do absolutely nothing.

Would a framework like angular.js, react.js, durandal, etc. work for what I’m trying to do? Possibly, but I wasn’t interested in the learning curve (at this moment, mind you, although learning at least one or two of these frameworks sounds like fun), or the weight. All I wanted to do was take a token, replace it with its value, and evaluate the result as an expression. I thought to myself “man, a 'data-' style attribute would be really nice about now”. I then remembered that Markdown allows HTML to be inserted.

So this is what I did! I made some example output in jsbin:

<!DOCTYPE html>
<html>
<head>
 <script src="//code.jquery.com/jquery-1.11.1.min.js"></script>
 <meta charset="utf-8">
 <title>JS Bin</title>
</head>
<body>
 <input type="hidden" id="max2" value="90" />
 <input type="hidden" id="max3" value="900" />
 <input type="hidden" id="max4" value="9000" />
 <input type="hidden" id="mx" value="6" />
 This is a test to see how:<br /><br />
 <span data-fix="{max2}">x</span>
 / <span data-fix="{mx}">16</span> = <span data-fix="{max2} / {mx}">y</span>.<br><br>
 <span data-fix="{max3}">x</span>
 / <span data-fix="{mx}">16</span> = <span data-fix="{max3} / {mx}">y</span>.<br><br>
 <span data-fix="{max4}">x</span>
 / <span data-fix="{mx}">16</span> = <input type="text" data-fix="{max4} / {mx}" value="?" />.<br><br>
</body>
</html>

So for elements that I want to dynamically render a mathematical value, I enclose them in a span tag with an attribute of data-fix. This contains the formula, which is actually something that eval() can parse in Javascript. A bit dangerous, if someone were to get into the admin and change a value to something destructive, but then they could do a heck of a lot more than just add stuff for eval() statements to hit.

A little javascript magic goes a long way:

$(function () {
 var regex = /\{[\w\-]+?\}/gim;
 // For each element with a data-fix attribute, resolve each {token}
 // against the element with that id, eval() the resulting formula,
 // and write the computed value back into the element.
 $("[data-fix]").each(function () {
  var that = $(this);
  var formula = that.attr("data-fix");
  //console.log("formula = " + formula);
  var rendered = formula.replace(regex, function (match) {
   var s = match.replace("{", "").replace("}", "");
   var elem = document.getElementById(s);
   if (elem == null)
    return "null";
   return $(elem).val() || $(elem).html(); // || (logical or), not | (bitwise)
  });
  //console.log("rendered = " + rendered);
  var final = eval(rendered);
  //console.log("final = " + final);
  if (this.tagName.toLowerCase() == "input" || this.tagName.toLowerCase() == "select")
   that.val(final);
  else
   that.html(final);
 });
});

What’s going on here is that I’m grabbing all instances of any element that has the data-fix attribute, grabbing the “template” formula, doing any replacing in it as needed, doing an eval() on the final formula, and then inserting the value back into the element, one time.

Check out the JS Bin for more details and to play with it!


Detecting changes in a table

Suppose I want to sync a bunch of information with my clients when the program starts and at specific intervals throughout the day. However, I don’t want the server to send over a huge list of all the information every single time it is requested, as it’s a waste of bandwidth and time. (I should mention that a lot of clients I work with have spotty internet connections, so it’s important that the apps function well offline and transfer only the data that needs to be transferred.)

How do I know that the client has the latest updated copy of data from the database? We do this by generating a checksum of the data that we would transfer. The client would send over its last known checksum value, or NULL if it does not have one stored, and the server only pushes data over if the checksum value passed back to the server is NULL or differs from the checksum the server generates. It’s ridiculously fast, at least for the data sets I send over – a few hundred to a few thousand records per set.
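A sketch of that exchange in JavaScript (illustrative only; the function names and the toy rolling hash are mine, standing in for SQL Server's CHECKSUM_AGG(BINARY_CHECKSUM(*))):

```javascript
// Stand-in for the server-side checksum aggregate: hash every row into
// one 32-bit number so any changed row changes the result.
function checksumOf(rows) {
  var sum = 0;
  rows.forEach(function (row) {
    var s = JSON.stringify(row);
    for (var i = 0; i < s.length; i++) {
      sum = ((sum * 31) + s.charCodeAt(i)) | 0; // simple 32-bit rolling hash
    }
  });
  return sum;
}

// The sync decision: the client sends its last known checksum (or null),
// and the server returns the data set only when the checksum differs.
function syncResponse(rows, clientChecksum) {
  var serverChecksum = checksumOf(rows);
  if (clientChecksum === serverChecksum) {
    return { checksum: serverChecksum, data: null }; // client is up to date
  }
  return { checksum: serverChecksum, data: rows };   // push the full set
}
```

The first call (checksum null) always returns data; subsequent calls with the stored checksum return nothing until a row changes.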

Here are some functions I created to get the data that needs to be synced:

CREATE FUNCTION FranchiseInfo_Franchises (@franchiseID INT) AS ...
CREATE FUNCTION FranchiseInfo_Items (@franchiseID INT) AS ...
CREATE FUNCTION FranchiseInfo_Staff (@franchiseID INT) AS ...
CREATE FUNCTION FranchiseInfo_Locations (@franchiseID INT) AS ...
CREATE FUNCTION FranchiseInfo_PrepaidCards (@franchiseID INT) AS ...

Et cetera.

Each function returns columns of data that I push to the client. You don’t specifically NEED to use table-valued functions, but in this scenario it definitely makes it easier to do the following steps. (Specifically, if I had used just stored procs, I would need to copy and paste the same SQL used for the procs in the checksum procedure, which reduces maintainability.)

What we want is some sort of a checksum that would change when the data in the table has changed.

CREATE PROCEDURE FranchiseInfo_GetChecksums
	@franchiseID INT
AS
SELECT 'Franchise' AS DataType, (SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM dbo.FranchiseInfo_Franchises(@franchiseID)) AS [Checksum]
UNION SELECT 'Items', (SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM dbo.FranchiseInfo_Items(@franchiseID))
UNION SELECT 'Locations', (SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM dbo.FranchiseInfo_Locations(@franchiseID))
UNION SELECT 'PrepaidCards', (SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM dbo.FranchiseInfo_PrepaidCards(@franchiseID))
UNION SELECT 'Staff', (SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM dbo.FranchiseInfo_Staff(@franchiseID))
;

(Later I changed it so each one returned the checksum value in a column, so the procedure would return only one row with the checksum values. It was much easier to do that, rather than parse the row values. I’ll leave this as an exercise to the reader.)

BINARY_CHECKSUM() takes in a list of columns, in this case the columns returned from the table-valued functions. Yes, using * for columns is frowned upon, but I don’t care here: it makes it easier to update the functions and know that any new columns are included in the aggregation. CHECKSUM_AGG then aggregates the per-row checksums into one checksum.

In my client code, I provide a list of the checksums I already have – as Nullable<long> / long? datatype, and if I have the checksum stored, I pass it in, otherwise, I leave the value as NULL. The server calls the function above and compares the checksums returned. If the checksum is different than what was passed in, then I call the appropriate procedure to return updated data to the client. This way, I’m only passing down the data set only if it has changed.


Using a CTE in SQL to update or delete anything over the Nth row for a given grouping of fields, a.k.a. Building a Token Management System

I learned a new trick for dealing with deleting rows after the nth index from a table. First, some backstory:

I’m in the gaming industry with clients overseas, and our client has an online casino and lottery house that runs games like slots, blackjack and poker. To launch these games, we use a token system. The customer launches a URL with a GUID token, which is immediately swapped once the game loads. The client continues to request tokens to perform actions behind the scenes, and the token must continually change to prevent anyone who got access to a token, whether by random guess or worse, from using it.

Token rotation systems aren’t the simplest thing in the world – the token system has to handle multiple tokens for a given customer, because if the client application sends a request to our server that is processed successfully, changing the token, but the client never gets a response, the client is left with an old token and cannot reference that particular customer anymore. The system I set up for this vendor manages 5 tokens simultaneously, with them expiring after 10 minutes. My token table looks like this:

ID	CreatedOn	CustomerID	GamingVendorID	Token	CountOfUses	LastUsedOn	IsExpired
321	2014-02-13 12:09:48.337	1151	4	73F636BC-F6C4-4301-83A3-3A24744E0C65	1	2014-02-13 11:59:48.347	0
324	2014-02-13 11:59:48.363	1151	4	12C33AB2-44CA-4946-9CDC-AE0934FA8DBC	0	NULL	0

However, in production there will be tons of tokens, and each vendor is configured to allow a maximum number of tokens simultaneously – this vendor has 5. Without some sort of cleanup routine, there will be a lot more than 5 unexpired tokens in the table. I have a stored procedure to clean up these excess tokens, but since it has to run on every call to validate and rotate a token, I can’t have it look over the data by customer ID and vendor ID, as it would be too slow. I also didn’t want to clean up just the requesting vendor’s and customer’s tokens and leave everything else alone; I wanted the table to have as few rows as possible. So, how can it be done with a single batch and no looping, temporary tables or cursors?

Enter the Common Table Expression (CTE):

WITH cteTokens AS
(
	SELECT t.ID,
		ROW_NUMBER() OVER (PARTITION BY g.ID, t.CustomerID ORDER BY t.CreatedOn DESC) AS RowNumber,
		g.[AllowedNumberOfTokensKeptBesidesCurrentToken] AS CountToKeep
	FROM CustomerRotatingTokens t
	LEFT OUTER JOIN GamingVendors g ON (g.ID = t.GamingVendorID)
	WHERE t.CountOfUses >= g.[AllowedNumberOfTokenReuses]
		OR (g.[TokensExpireAfterNumberOfSeconds] IS NOT NULL AND DATEADD(SECOND, g.[TokensExpireAfterNumberOfSeconds], t.CreatedOn) < GETDATE())
),
cteTokensToExpire AS
(
	SELECT ID FROM cteTokens WHERE RowNumber > CountToKeep
)
UPDATE CustomerRotatingTokens SET IsExpired = 1 WHERE ID IN (SELECT ID FROM cteTokensToExpire);

Here’s an overview of the interesting parts of the code:

  • WITH cteTokens AS – this names our expression (think of it as a table alias)
  • ROW_NUMBER() OVER (PARTITION BY g.ID, t.CustomerID ORDER BY t.CreatedOn DESC) AS RowNumber – this adds a sequential row number to each row, starting at 1. The bare minimum syntax, ROW_NUMBER() OVER (ORDER BY t.CreatedOn DESC), numbers every row in one sequence; but we need to expire any tokens after the 5th newest token per vendor and customer, so the tokens must be “grouped” by the vendor’s ID and the customer ID. That’s where the PARTITION BY part comes in – think of it almost like a GROUP BY, so ROW_NUMBER() starts counting over each time it reaches a new combination of those fields.
  • cteTokensToExpire AS – this next CTE selects a new table containing only those tokens whose index (the ROW_NUMBER()) is greater than what the vendor is allowed to keep – 5.
  • UPDATE… WHERE ID IN (SELECT ID FROM cteTokensToExpire) – expires any tokens that were selected by cteTokensToExpire.
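The keep-the-N-newest-per-group idea can be sketched in JavaScript too (my own illustration; the field names are hypothetical, not from the actual schema):

```javascript
// Mirror of the PARTITION BY / ROW_NUMBER trick: group tokens by
// vendor + customer, sort each group newest-first, and mark anything
// past the Nth entry as expired (index >= countToKeep is the JS
// equivalent of RowNumber > CountToKeep).
function expireExcessTokens(tokens, countToKeep) {
  var groups = {};
  tokens.forEach(function (t) {
    var key = t.vendorId + ":" + t.customerId;   // the "partition" key
    (groups[key] = groups[key] || []).push(t);
  });
  Object.keys(groups).forEach(function (key) {
    groups[key]
      .sort(function (a, b) { return b.createdOn - a.createdOn; }) // newest first
      .forEach(function (t, index) {
        if (index >= countToKeep) t.isExpired = true;
      });
  });
  return tokens;
}
```

With six tokens for one customer and countToKeep of 5, only the oldest token gets flagged.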

Part of the client requirements is that we differentiate between invalid and expired tokens, so I can’t just delete them from the table immediately, but I have another CTE that blows out expired rows that are a bit older than when they expired.

The speed of the CTE is amazing. It’s most definitely faster than building a temporary table to aggregate the results and then expire and delete the rows, which would not have flown since this has to run a LOT.