T-SQL Tuesday #53 Round-up

Logo for T-SQL Tuesday

Welcome back, folks! We had 10 submissions for this month’s T-SQL Tuesday, which was all about dirty little tricks inside of SQL Server. Without any further ado, and in no particular order, our entrants:

Riddler, Harley Quinn and the Joker by Pat Loika, on Flickr

The Riddler, Harley Quinn, and the Joker

Massive thanks to all those that participated – and if I missed you, just shoot me an email or a tweet and I’ll get that corrected pronto.

T-SQL Tuesday #53 – Why So Serious? (Announcement)

Logo for T-SQL Tuesday

T-SQL Tuesday is coming up on April 8th, and Adam has given me the honor of choosing a topic for our next blogging party for a second time. Given that this announcement is occurring on April Fools’ Day, I figure it’s a perfect time for a theme:

Dirty Little Tricks

The Joker

Your mission, should you choose to accept it, is to come up with a piece of T-SQL that could mess with your coworkers. And if you’re committed to the fun, implement it!

Need some ideas? Maybe you could…

  • Write a lengthy script that returns a funny or annoying message
  • Implement hilarious login triggers
  • Use Resource Governor to throttle your victim developers
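As a flavor of the first idea, here’s a harmless sketch: a script that looks like serious maintenance but only prints messages and waits. The messages and delays are made up, and this belongs nowhere near Production.

-- Looks busy, does nothing. Test environments only!
PRINT 'Rebuilding all indexes...';
WAITFOR DELAY '00:00:05';

PRINT 'Updating statistics with FULLSCAN...';
WAITFOR DELAY '00:00:05';

PRINT 'Shrinking tempdb...';
WAITFOR DELAY '00:00:05';

PRINT 'Just kidding. Happy April Fools'' Day!';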

Honestly the sky is the limit. Just be sure to blog about the results of your trickery if you do pull it off.

Special Warning

Folks, don’t get yourself fired. This is meant to be fun. But you know your work environment better than I do. If your job has a stick up its rear-end, just write your tricks down and keep them up your sleeve for when you’re working at a more relaxed place.

And I shouldn’t have to say it, but don’t do this sh!t in Production.

The Rules of the Game

  1. Your post must be published between 00:00 GMT Tuesday, April 8, 2014 and 00:00 GMT Wednesday, April 9, 2014.
  2. Include the T-SQL Tuesday logo at the top of your post, and have it link back to this announcement.
  3. Leave a comment here with the link to your post. It makes it easier for me to find who is participating.
  4. If you’d like to host a T-SQL Tuesday of your own, review the historical list of topics, and reach out to Adam Machanic for scheduling.

Good luck on your tricks! Don’t get fired!

Harley Quinn

T-SQL Tuesday #51 – Bad Bets

Logo for T-SQL Tuesday

It’s been an incredibly long time since I last blogged, but this month’s T-SQL Tuesday topic was so juicy, I had to throw down. This month’s event is being hosted by Jason Brimhall and is on the topic of risky bets in IT.

I’m sure I won’t have the best or most outrageous stories, but I do have some fun ones to share. These are all small anecdotes from different employers and consulting situations, and I am going to keep details as anonymous as I can to protect the innocent. In no particular order…

  • Internet Troubles

    When I first became a DBA, I was tipped off that if SQL Server was having trouble, I could restart IIS to fix it.

    Obviously I got better information.

  • RAIDless

    In one completely virtualized environment, I eventually discovered that the underlying storage supporting all the VMs was a single RAID 0 configuration.

    My memos and pleas went unheeded.

  • RAID Master

    In a different virtualized environment, the sysadmin set up all the underlying storage in a RAID 6 configuration without asking for input from anyone else. At least we’ve got extra parity protection…

    Pretty sure that RAID 6 is still chugging.

  • Size Queen

    On one longer-term contract, I was taking a look at database configurations. I discovered one database with a data file of only 3MB, but a log file that had grown to 65,000 times that size (roughly 190GB).

    A pretty common, though egregious, occurrence: a database in FULL recovery without log backups. Backed up the log, shrunk it, put the database in SIMPLE recovery, got a coffee.
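    For the curious, the fix amounted to something like this (hypothetical names and paths, for illustration only):

    -- 1. Back up the bloated transaction log so it can be cleared.
    BACKUP LOG OffendingDB TO DISK = N'S:\MSSQL\Backup\OffendingDB_log.trn';

    -- 2. Shrink the log file back down to a sane size (target is in MB).
    USE OffendingDB;
    DBCC SHRINKFILE (OffendingDB_log, 1024);

    -- 3. Since nobody was taking log backups, SIMPLE recovery lets
    --    the log truncate on its own going forward.
    ALTER DATABASE OffendingDB SET RECOVERY SIMPLE;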

  • Copy-what now?

    I lied; I saved the best for last. I had to quit a job over this one.

    I was working freelance when the owner of the company told me about a new project – an eCommerce site for equipment operation manuals. As I was given the files, I realized that the operation manuals in question belonged to famous companies, for products still in production. These were manuals one could download freely from the internet via product support pages.

    I mentioned this to the owner, but no changes were made to the project. I informed the owner that we couldn’t take on the project: the manuals were copyrighted by these corporations, and we could not distribute the files – much less sell them – without their permission. The owner nodded in understanding.

    The following day, the owner mentioned that the client did have letters of permission from all these corporations (well over 100 companies), and that the owner had seen them. I asked to see them, but the owner didn’t actually have them on hand. I still couldn’t start on the project until I’d seen the letters.

    Nothing had moved forward after a few days. The owner flat out inquired, “Are you serious that you won’t do this project?!”

    Unfazed, I replied, “Correct, I will not complete this project, as it’s against the law.”

    “But my lawyer says it’s fine,” the owner countered.

    “And my lawyer says it’s illegal.”

    Silence.

    “Well I can’t keep paying you if you won’t do this project,” the owner mentioned. An ultimatum.

    “I understand,” I said. “I can finish out this week while you find someone else to take over.”

    Thankfully I already had another job prospect that was just about to land at that point.

Those are my anecdotes! I hope you enjoyed them! Check out the other T-SQL Tuesday posts in the comments section of Jason’s blog!

A Replication Experiment

The other day at work, the following question came up: if you initialize replication from a backup, can you drop tables that don’t belong to the publication (i.e., those that are not articles) without breaking replication? TL;DR: Yes.

Why did this question come up? Actually, there was a good reason—data security. After we initialized the subscription via a backup, since a snapshot would be large and difficult to perform, we’d like to get rid of the data that isn’t going to be used in the replication solution. Dropping those extraneous tables would help prevent unauthorized data access, and would keep the subscriber focused on only the data that we needed. It would also allow for other management benefits, such as a smaller database size. (Or the inverse, more space for replicated transactions!)

I’m rather new to replication as a whole, and so I wasn’t sure if it was possible, or if it would break the replication given that you’d be creating some transactions on the subscriber. So I did the following experiment to prove that you could drop non-replicated tables without affecting the movement of transactions from publisher to subscriber.

Setting up our Publisher

I began by setting up a new database, ReplOrigin, with two tables inside of it, NonReplData and ReplData. Yes, all those names are pretty loaded in meaning – ReplOrigin is meant to be my publisher, NonReplData will hold a small amount of data that is not included in the publication (but will be restored by the backup), and ReplData will be the table that is included as an article. While we control the data entering into NonReplData, we’re simply generating 10,000 random-ish rows for the ReplData table.

-- Run this part to set up the ReplOrigin database
USE master;
GO

IF NOT EXISTS (
	SELECT d.name
	FROM sys.databases d
	WHERE d.name = 'ReplOrigin'
)
	CREATE DATABASE ReplOrigin;
GO

USE ReplOrigin;
GO

IF NOT EXISTS (
	SELECT s.name + '.' + t.name
	FROM sys.schemas s
	JOIN sys.tables t
		ON s.schema_id = t.schema_id
	WHERE s.name = 'dbo'
		AND t.name = 'NonReplData'
)
	CREATE TABLE dbo.NonReplData (
		Id INT NOT NULL IDENTITY PRIMARY KEY,
		DateCreated DATE NOT NULL DEFAULT GETDATE(),
		Name VARCHAR(40) NOT NULL
	);

IF NOT EXISTS (
	SELECT s.name + '.' + t.name
	FROM sys.schemas s
	JOIN sys.tables t
		ON s.schema_id = t.schema_id
	WHERE s.name = 'dbo'
		AND t.name = 'ReplData'
)
	CREATE TABLE dbo.ReplData (
		Id INT NOT NULL IDENTITY PRIMARY KEY,
		DateProcessed DATETIME NOT NULL DEFAULT GETDATE(),
		Value INT NOT NULL
	);
GO

BEGIN TRAN;
	INSERT INTO dbo.NonReplData (Name)
	VALUES
		('Kendra Little'),
		('Matthew Velic'),
		('Jeremiah Peschka');
COMMIT;

BEGIN TRAN;
	INSERT INTO dbo.ReplData (Value)
	SELECT TOP 10000
		CAST(RAND(CHECKSUM(NEWID())) * 1000000 + 1 AS INT) AS Value
	FROM sys.all_columns c1
	CROSS JOIN sys.all_columns c2;
COMMIT;
GO

SELECT Id, DateCreated, Name
FROM dbo.NonReplData;

SELECT COUNT(*) AS ReplOriginDataCount
FROM dbo.ReplData;

You can see in the following screenshots that I now have ReplOrigin listed as a database, and that I have some results from my initial table inserts.

ReplOrigin added to databases.


Our initial data results.

Setting up Replication

While I initially used the GUI to help guide my work, I generated a script that I included here, albeit formatted more vertically. What’s notable is that I’ve set the @allow_initialize_from_backup argument equal to true in the sp_addpublication procedure near the bottom of the setup script. In order to run this in your test or personal lab environment, you’ll have to change the following details; I’d recommend a Find-and-Replace.

  • I’m using my computer, Matt-Rig, as a local distributor.
  • My data folder resides in S:\MSSQL\Data.
  • My transaction logs reside in S:\MSSQL\TLogs.
  • I created my snapshot folder at S:\MSSQL\ReplData.

-- Doing the replication bits
-- Installing the server as a Distributor.
USE master;
GO

EXECUTE sp_adddistributor
	@distributor = N'MATT-RIG',
	@password = N'';
GO

EXECUTE sp_adddistributiondb
	@database = N'distribution',
	@data_folder = N'S:\MSSQL\Data',
	@log_folder = N'S:\MSSQL\TLogs',
	@log_file_size = 2,
	@min_distretention = 0,
	@max_distretention = 72,
	@history_retention = 48,
	@security_mode = 1
GO

USE [distribution];
GO

IF NOT EXISTS (
	SELECT *
	FROM sysobjects
	WHERE name = 'UIProperties'
		AND type = 'U '
)
BEGIN;
	CREATE TABLE UIProperties (
		id INT
	);
END;
 
IF EXISTS (
	SELECT *
	FROM ::fn_listextendedproperty('SnapshotFolder', 'user', 'dbo', 'table', 'UIProperties', null, null)
)
	EXECUTE sp_updateextendedproperty
		N'SnapshotFolder',
		N'S:\MSSQL\ReplData',
		'user',
		dbo,
		'table',
		'UIProperties';
ELSE 
	EXECUTE sp_addextendedproperty
		N'SnapshotFolder',
		N'S:\MSSQL\ReplData',
		'user',
		dbo,
		'table',
		'UIProperties';
GO

EXECUTE sp_adddistpublisher
	@publisher = N'Matt-Rig',
	@distribution_db = N'distribution',
	@security_mode = 1,
	@working_directory = N'S:\MSSQL\ReplData',
	@trusted = N'false',
	@thirdparty_flag = 0,
	@publisher_type = N'MSSQLSERVER';
GO

USE [ReplOrigin];
GO

EXECUTE sp_replicationdboption
	@dbname = N'ReplOrigin',
	@optname = N'publish',
	@value = N'true';
GO

-- Adding the transactional publication
USE [ReplOrigin];
GO

EXECUTE sp_addpublication
	@publication = N'ReplOriginPub',
	@description = N'Transactional publication of database ''ReplOrigin'' from Publisher ''MATT-RIG''.',
	@sync_method = N'concurrent',
	@retention = 0,
	@allow_push = N'true',
	@allow_pull = N'true',
	@allow_anonymous = N'true',
	@enabled_for_internet = N'false',
	@snapshot_in_defaultfolder = N'true',
	@compress_snapshot = N'false',
	@ftp_port = 21,
	@ftp_login = N'anonymous', 
	@allow_subscription_copy = N'false', 
	@add_to_active_directory = N'false', 
	@repl_freq = N'continuous', 
	@status = N'active', 
	@independent_agent = N'true', 
	@immediate_sync = N'true', 
	@allow_sync_tran = N'false', 
	@autogen_sync_procs = N'false', 
	@allow_queued_tran = N'false', 
	@allow_dts = N'false', 
	@replicate_ddl = 1, 
	@allow_initialize_from_backup = N'true',
	@enabled_for_p2p = N'false', 
	@enabled_for_het_sub = N'false';
GO

EXECUTE sp_addpublication_snapshot
	@publication = N'ReplOriginPub',
	@frequency_type = 1,
	@frequency_interval = 0,
	@frequency_relative_interval = 0,
	@frequency_recurrence_factor = 0,
	@frequency_subday = 0,
	@frequency_subday_interval = 0,
	@active_start_time_of_day = 0,
	@active_end_time_of_day = 235959,
	@active_start_date = 0,
	@active_end_date = 0,
	@job_login = null,
	@job_password = null,
	@publisher_security_mode = 1;

USE [ReplOrigin];
GO

EXECUTE sp_addarticle
	@publication = N'ReplOriginPub',
	@article = N'ReplDataTrans',
	@source_owner = N'dbo',
	@source_object = N'ReplData',
	@type = N'logbased',
	@description = null,
	@creation_script = null,
	@pre_creation_cmd = N'drop',
	@schema_option = 0x000000000803509F,
	@identityrangemanagementoption = N'manual',
	@destination_table = N'ReplData',
	@destination_owner = N'dbo',
	@vertical_partition = N'false',
	@ins_cmd = N'CALL sp_MSins_dboReplData',
	@del_cmd = N'CALL sp_MSdel_dboReplData',
	@upd_cmd = N'SCALL sp_MSupd_dboReplData';
GO

After running the replication script, you should receive a confirmation along the lines of the following:

Many multitudes of messages from our replication script.

By refreshing and expanding the Replication folder in the Object Explorer, you’ll see the new Local Publication that we created, but you’ll notice that we haven’t subscribed anything to it yet.

Our new local publication.

Also, if you refresh your System Databases folder, you’ll see the brand new distribution database.

Lookie, it’s the distribution database!

Backing up and Restoring

At this point, we can backup our ReplOrigin database, and then use that backup to restore to our new database, ReplDest, which will act as a subscriber. Again, here’s the pertinent parts of the script you’ll need to change in order to get it to run in your non-production lab.

  • My backup folder resides in S:\MSSQL\Backup.
  • My data folder resides in S:\MSSQL\Data.
  • My transaction logs reside in S:\MSSQL\TLogs.

-- Creating our backup to initialize the new subscriber
USE master;
GO

BACKUP DATABASE ReplOrigin
TO DISK = 'S:\MSSQL\Backup\ReplOrigin_FULL.bak'
	WITH CHECKSUM;

-- Create a new database, our subscriber, from the backup
RESTORE DATABASE ReplDest
FROM DISK = 'S:\MSSQL\Backup\ReplOrigin_FULL.bak'
	WITH CHECKSUM, 
	MOVE 'ReplOrigin' TO 'S:\MSSQL\Data\ReplDest.mdf',
	MOVE 'ReplOrigin_log' TO 'S:\MSSQL\TLogs\ReplDest_log.ldf';

After you’ve restored, you’ll note that the ReplDest database has been added to the Object Explorer.

A subscriber is born!

Subscribing to the Publisher

At this point, we haven’t done anything special. We’ve created our initial database, configured a distributor and publisher, and created a publication with an article. We’ve used a backup to “initialize our subscriber,” but we haven’t actually added ReplDest as a subscriber yet. We can do this now. You should note that for @sync_type, I’ve provided initialize with backup, and I’ve included the pertinent backup information in the @backupdevicetype and @backupdevicename arguments. Again, be sure to change S:\MSSQL\Backup to your own backup location.

-- Create a new subscription at the Publisher (ReplOrigin)
USE ReplOrigin;
GO

EXECUTE sp_addsubscription
	@publication = 'ReplOriginPub',
	@article = 'all',
	@subscriber = 'Matt-Rig',
	@destination_db = 'ReplDest',
	@sync_type = 'initialize with backup',
	@status = 'active',
	@update_mode = 'read only',
	@backupdevicetype = 'disk',
	@backupdevicename = 'S:\MSSQL\Backup\ReplOrigin_FULL.bak';
GO

Refreshing the Replication folder in the Object Explorer, you’ll see that we’ve created a subscriber to our publication.

Subscribing to our publication using a backup.

We can now test our replication by generating new transactions in the ReplData table in the ReplOrigin database. Then we can count the rows in each database’s copy of the table to ensure that replication is moving them across. (Granted, you may need to wait a few seconds for the changes to arrive.)

-- Generating more transactions to test the replication
USE ReplOrigin;
GO

BEGIN TRAN;
	INSERT INTO dbo.ReplData (Value)
	SELECT TOP 10000
		CAST(RAND(CHECKSUM(NEWID())) * 1000000 + 1 AS INT) AS Value
	FROM sys.all_columns c1
	CROSS JOIN sys.all_columns c2;
COMMIT;
GO

SELECT COUNT(*) AS ReplOriginDataCount
FROM dbo.ReplData;

USE ReplDest;
GO

SELECT COUNT(*) AS ReplDestDataCount
FROM dbo.ReplData;

Comparing those counts, we can see that the new rows have been replicated from ReplOrigin to ReplDest as expected.

Our initial results.
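If you’d like more assurance than raw counts, sp_replcounters offers a quick sanity check; run from the publisher, it reports per-database replication statistics such as replicated transaction counts, throughput, and latency.

-- A quick look at replication throughput and latency on the publisher.
USE ReplOrigin;
GO

EXECUTE sp_replcounters;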

Dropping Non-Replicated Tables

We’ve finally reached our experiment phase! Running the following script will drop the NonReplData table from the ReplDest database.

-- Testing dropping the Non-Replication table from ReplDest
USE ReplDest;
GO

IF EXISTS (
	SELECT s.name + '.' + t.name
	FROM sys.schemas s
	JOIN sys.tables t
		ON s.schema_id = t.schema_id
	WHERE s.name = 'dbo'
		AND t.name = 'NonReplData'
)
	DROP TABLE dbo.NonReplData;
GO

Once that has completed, you can generate another batch of transactions to be replicated from ReplOrigin, again counting the rows to ensure that they are still being replicated.

-- Adding more transactions to our replication database to see if it works since we changed the underlying schema somewhat
USE ReplOrigin;
GO

BEGIN TRAN;
	INSERT INTO dbo.ReplData (Value)
	SELECT TOP 10000
		CAST(RAND(CHECKSUM(NEWID())) * 1000000 + 1 AS INT) AS Value
	FROM sys.all_columns c1
	CROSS JOIN sys.all_columns c2;
COMMIT;
GO

SELECT COUNT(*) AS ReplOriginDataCount
FROM dbo.ReplData;

USE ReplDest;
GO

SELECT COUNT(*) AS ReplDestDataCount
FROM dbo.ReplData;

Our final, post-DROP results; success!

Success! We can delete the non-replicated tables without affecting or breaking the replication, which means we can provide a higher level of data security and appropriate access after we’ve initialized from a backup.

Once you are finished testing, you’ll want to clean up your lab. Then you’ll be able to run the whole thing over again, if you’d like. Again, be sure to change Matt-Rig to your appropriate publisher/subscriber.

-- Cleanup
USE ReplOrigin;
GO

EXECUTE sp_dropsubscription 
  @publication = 'ReplOriginPub', 
  @article = N'all',
  @subscriber = 'MATT-RIG';
GO

EXECUTE sp_dropsubscriber @subscriber = 'Matt-Rig';
GO

EXECUTE sp_droppublication @publication = 'ReplOriginPub';
GO

EXECUTE sp_replicationdboption 
	@dbname = 'ReplOrigin', 
	@optname = 'publish', 
	@value = 'false';
GO

EXECUTE sp_dropdistpublisher @publisher = 'Matt-Rig';
GO

EXECUTE sp_dropdistributiondb @database = 'distribution';
GO

EXECUTE sp_dropdistributor;
GO

USE master;
GO

IF EXISTS (
	SELECT d.name
	FROM sys.databases d
	WHERE d.name = 'ReplDest'
)
	DROP DATABASE ReplDest;
GO

IF EXISTS (
	SELECT d.name
	FROM sys.databases d
	WHERE d.name = 'ReplOrigin'
)
	DROP DATABASE ReplOrigin;
GO

As I mentioned, I’m still new when it comes to replication, but I was glad I was able to complete a small, contained experiment in order to answer a business-related question. Even more than that, I got to learn a bunch of new-to-me replication procedures, which is the best part, because I love to do as much in T-SQL as I can.
