SQL Server script to populate a table of sequential dates in multiple formats

I was writing lots of TSQL today for something I was working on, which in itself is a rather rare occasion since I do so much more infrastructure and architecture work nowadays. I needed to write a script to populate a table with a series of rows containing sequential dates. The table had 2 columns: one of the [date] datatype, and the other a 6 character string representing the year and the month as numbers concatenated together, so for example 15th March 2012 would become 201203.

The table then needed a row for every day over a number of years. Here's the code in case anyone fancies reusing it.

create table #dates
(
	CustomMonth char(6),
	lookupdate date
)

declare @date_increment date

--change your start date here if you fancy
set @date_increment = '2005-01-01'

--change your end date here if you fancy
while @date_increment <= '2012-12-31'
begin

	insert #dates (CustomMonth, lookupdate)
	select convert(char(4),(datepart(year,@date_increment)))
		+ RIGHT('0' + CONVERT(VARCHAR(2), DATEPART(MM, @date_increment)), 2),
	@date_increment

	--increment after the insert so the start date itself gets a row
	set @date_increment = DATEADD(day,1,@date_increment)

end

--check it looks ok
select * from #dates
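
If you'd rather avoid the loop, here's a minimal set-based sketch of the same population using a recursive CTE, under the same assumptions (same table, same date range). The OPTION (MAXRECURSION 0) hint is needed because recursive CTEs are capped at 100 levels by default.

;with dates as
(
	--anchor member: the start date
	select cast('2005-01-01' as date) as lookupdate
	union all
	--recursive member: add one day at a time up to the end date
	select dateadd(day, 1, lookupdate)
	from dates
	where lookupdate < '2012-12-31'
)
insert #dates (CustomMonth, lookupdate)
select convert(char(4),(datepart(year,lookupdate)))
	+ RIGHT('0' + CONVERT(VARCHAR(2), DATEPART(MM, lookupdate)), 2),
lookupdate
from dates
option (maxrecursion 0)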

Create automated PGP task in SSIS using GnuPG to decrypt files

Previously I wrote about my efforts to automate the decryption of files in SSIS using gpg2.exe, which comes as part of the GnuPG package. The original article is here:

SSIS Task to decrypt PGP files automatically

However, after deploying the working solution into production, to be run as a scheduled task, I found that the package still had some issues. It behaved rather differently when deployed into the production environment compared with running in the BIDS environment. When executing the exact same code in production that had worked processing the exact same files in development (and I mean exact: same account, same files, same everything) I got an error which looked like this (sanitised for security):

Error: 2012-03-13 11:16:07.10 Code: 0xC0029151 Source: run decrypt Execute Process Task Description: In Executing "C:\Program Files (x86)\GNU\GnuPG\gpg2.exe" "--batch --passphrase-fd 0 --decrypt-files [myfilename]" at "", The process exit code was "2" while the expected was "0". End Error DTExec: The package execution returned DTSER_FAILURE (1). Started: 11:16:02 Finished: 11:16:07 Elapsed: 4.609 seconds. The package execution failed. The step failed.

So I was again getting error code 2, as I had previously, for which the --batch switch had resolved the issue in development. The error code was the same, but the reason obviously had to be different now, and this required a little more investigation to get to the bottom of. First I ran Process Monitor, often my first port of call in such scenarios, to check whether I was hitting some obscure permissions error when running live under the SQL Agent. It came back totally clean. (As an aside, I had done the same when initially installing GnuPG, to resolve an issue where it couldn't access a temp directory it required to do decryption.)

A bit of research around the web and through the full GnuPG documentation left me using a further switch:

--status-fd

which allowed me to see the status messages in the output which were previously being swallowed by the SSIS task when run in production. SSIS logging was enabled, but it wasn't getting anything back from the gpg2 executable beyond the exit code.

I used a couple of different versions of this switch, which looked like this:

gpg2 --batch --status-fd 2 --decrypt-files test.gpg 2> c:\gk\test\output.txt

which outputs the status messages to c:\gk\test\output.txt, or you can do this

gpg2 --batch --status-fd 2 --decrypt-files test.gpg

which outputs the messages to the console.

Either way you end up with the following output (again slightly sanitised):

[GNUPG:] FILE_START 3 test.gpg
[GNUPG:] ENC_TO [hexstring] 1 0
[GNUPG:] USERID_HINT [hexstring] [mykeyname] (mykeyname) <myemail@email.com>
[GNUPG:] NEED_PASSPHRASE [hexstring] [hexstring] 1 0
[GNUPG:] GOOD_PASSPHRASE
[GNUPG:] BEGIN_DECRYPTION
[GNUPG:] PLAINTEXT 62 [integer] test_.txt
[GNUPG:] PLAINTEXT_LENGTH 901226
[GNUPG:] DECRYPTION_OKAY
[GNUPG:] GOODMDC
[GNUPG:] END_DECRYPTION
[GNUPG:] FILE_DONE

but unfortunately this still didn't give me anything to go on, as it still worked in the test environment but not in production. Eventually, by playing with the logging levels and these switches in production, I got the details out in the SSIS log, which contained this key string:

gpg: decryption failed: No secret key

I then realised that I was being an idiot: the service account I was running the SQL Agent under did not have the certificate registered under its userid. I had only imported the certificate into Kleopatra for the development userid I logged in with, not for the service account. I simply imported the certificate into the service account's profile and then everything worked. This meant that the original instructions and code were valid, but I thought I'd put this post up in case anyone does the same stupid thing as me. It's worth remembering that certificates are imported at a user level into Kleopatra by default.
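
If you prefer the command line to the Kleopatra GUI, a sketch of the same fix would be to open a shell as the service account and import the key into that profile's keyring. The account name and key file below are placeholders for illustration:

rem open a command prompt as the service account (prompts for its password)
runas /user:MYDOMAIN\sqlagentsvc cmd

rem then, inside that prompt, import the key into this user's keyring
"C:\Program Files (x86)\GNU\GnuPG\gpg2.exe" --import mykey.asc

rem and confirm the secret key is now visible to this account
"C:\Program Files (x86)\GNU\GnuPG\gpg2.exe" --list-secret-keys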

SSIS Task to decrypt PGP files automatically

Update 2012-03-13 – If you still get error code 2 after running this code in production (but it continues to work in your development environment) you might like to look at the subsequent post I did about further troubleshooting of this issue

This is something that I spent a few hours on recently which I wanted to share. The requirement here is to create an SSIS task to automatically decrypt a variable number of files that have been encrypted with PGP. This task will live within a larger SSIS package which does other typical SSIS work: fetching files from FTP, moving them around a file system, streaming them into a database, and so forth.

The key here is that the task needs to be completely automated, so that no user interaction is required, e.g. typing in the passphrase. Whilst working this out I browsed around the web and found various solutions, but none was 100% perfect for my particular requirements. Initially all the options I tried either required me to enter the passphrase or returned error codes even on success. This post assumes a certain familiarity with SSIS development and PGP.

The PGP tool I used was the latest GPG4WIN, installed to the default location, which means that the actual executable is:

C:\Program Files (x86)\GNU\GnuPG\gpg2.exe

The PGP files I was receiving were encrypted with the public key I had passed to the external source, and could be decrypted simply using the GUI, or from the command line if I was prepared to type in the passphrase.

The way I automated this in SSIS was as follows:

Create a Foreach Loop to allow the processing of multiple files. The collection properties looked like this:

[Screenshot: Foreach Loop collection properties]

The variable mappings look like this:

[Screenshot: Foreach Loop variable mappings]

Inside this Foreach Loop I created an Execute Process Task. The process properties look like this:

[Screenshot: Execute Process Task process properties]

The Expressions properties look like this:

[Screenshot: Execute Process Task expressions]

It's important to note that the Arguments property on the Process page is set by the expression, not hard coded, although it subsequently appears there. It's critical to form the arguments in the expression builder to get them to work properly. The expression in text format is:

"--batch --passphrase-fd 0 --decrypt-files " + @[User::receive_file_name]

Part of this syntax is undocumented in the GPG help files and had to be picked up from the web. The confusion I had was that I found an article which used gpg.exe rather than gpg2.exe, and my version seemed to behave differently. The passphrase is held in a variable in the package and then passed to the command line as the StandardInputVariable; the -fd 0 part of --passphrase-fd 0 tells gpg2 to read the passphrase from file descriptor 0, i.e. standard input. However, this still doesn't work properly unless you also pass the --batch parameter. If you don't pass --batch then you still get challenged for the passphrase. If you run the package in debug mode you get the dialog box challenge, which you can then type into, but if you run in production mode the task just fails with error code 2.
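
If you want to sanity check what the task is effectively doing, you can reproduce it manually at a command prompt. This is just a sketch: test.gpg and mypassphrase are placeholders, and piping the passphrase in via echo mimics what the StandardInputVariable does in SSIS:

echo mypassphrase| "C:\Program Files (x86)\GNU\GnuPG\gpg2.exe" --batch --passphrase-fd 0 --decrypt-files test.gpg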

Whilst looking at this problem I also experimented with storing the passphrase in a file, amongst various other options. Although the above option worked for me, I also noted that a custom component is available for purchase at http://www.cozyroc.com/ssis/pgp-task which might be worth investigating if you have a budget to acquire tools and an enterprise grade ETL requirement.

Graham

Step by Step procedure for rolling upgrades to database mirroring for SQL Server 2008

Here's something I was up to during the week. It's a fairly mundane task, but despite that a very important one, both to the client I was doing it for and for most people generally who run this type of configuration.

The databases I was working with were part of a true 24x7 configuration where any downtime whatsoever has the potential to lose data (email marketing where every remote click event is tracked, meaning that you can't even take planned downtime as you can't control email recipients opening and clicking emails).

The systems in question run a fairly standard database mirroring configuration: 2 physical servers (the principal and mirror partners), mirroring multiple databases in high safety mode with automatic failover determined by the quorum of a third witness server. The task in question was to run Windows Update on the 2 partner servers and then apply SP3 for SQL Server 2008 to bring them up to build 5500.

The guys who had been running these servers previously told me that normally they just patched the mirror partner, failed over, and then patched the new mirror (previously the principal). This is the standard methodology of a rolling upgrade within a mirroring configuration, but it misses one important step. I'm incredibly risk averse in all situations, and in this scenario it's essential to remove the mirroring witness before starting the process; if you don't, there is a small risk that halfway through the upgrade and patching you suddenly find your mirroring partnership failing over.

In all fairness this is quite an unlikely scenario, as it would require a failure at the principal at the exact point in time the patch process was running. It would also require a theoretical problem with how the servers manage their quorum, as they ought to handle such a failure properly. But after many years in the business, and particularly after many years within Microsoft support, I've had the unfortunate experience of seeing a wide range of very obscure failures across the SQL Server product set, and a mirroring split brain is one of them.

A split brain can be very simply described as a scenario where both partners believe that they are either the principal or the mirror, invalidating the partnership. If you ever get into this scenario it's extremely horrible, and sometimes (again speaking from experience) you are obliged to do some rather dirty troubleshooting to recover the situation.

Sometimes my experiences at Microsoft support can scare people and skew their view of the product, since all we ever used to deal with in the escalation team were obscure problems or bugs that didn't normally occur and couldn't easily be solved. This means that whenever someone asks me about a certain procedure, I've normally seen it break in a really horrible way! 99.9% of the time in production scenarios this doesn't happen of course, but the moral of the story is that it makes me very risk averse.

So back to the point in hand: if you want to be risk averse when patching mirror partnerships, the first thing to do is remove the witness, dropping back to high safety WITHOUT automatic failover. That way, if something stupid happens whilst patching, the mirroring won't try to fail over and mess things up further.

To achieve this in a controlled fashion, here are my step by step instructions (remember, if you mirror multiple databases you need to run the scripts for each one):

1. Disable witness

ALTER DATABASE [your database] SET WITNESS OFF
go

2. Patch current mirror
3. Reboot current mirror if necessary
4. Failover from principal to newly patching mirror

--RUN THIS COMMAND ON THE CURRENT PRINCIPAL TO FAIL OVER TO THE OTHER SERVER
ALTER DATABASE [your database] SET PARTNER FAILOVER

5. There will be a short outage whilst the new principal comes online
6. Patch the current mirror (the original principal)
7. Reboot the current mirror if necessary
8. Fail back to the original principal server (optional)
9. Add back the witness

ALTER DATABASE [your database] SET WITNESS = 'TCP://[your FQDN]:[your port]'
go

10. You're good to go
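
As a sanity check between steps, you can confirm the role, safety level and witness state of each mirrored database with a quick query against sys.database_mirroring (a sketch; run it on either partner):

--shows role, safety level and witness state for all mirrored databases
select db_name(database_id) as database_name,
	mirroring_role_desc,
	mirroring_safety_level_desc,
	mirroring_state_desc,
	mirroring_witness_state_desc
from sys.database_mirroring
where mirroring_guid is not null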

Good luck with your patching!

Tracing select statements on specific objects in SQL Server without using Profiler

I published a blog post with Basefarm last week about tracing select statements in SQL Server without using Profiler. It demonstrates how to use SQL Audit (based on Extended Events) to record any select statements issued against particular tables over a prolonged time period, in a scalable, lightweight way. Although SQL Audit was designed, as its name suggests, to provide an audit framework for the product, I find it can be quite useful in other ways as well. In this example we were helping developers check their code base for unexplained access to a particular object.

You can read the full post here:

http://blog.basefarm.com/blog/2012/02/13/tracing-select-statements-on-specific-objects-in-sql-server-without-using-profiler/
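
For a flavour of what's involved, the shape of such an audit looks something like this. It's a minimal sketch with placeholder names (MyDatabase, dbo.MyTable, the audit names and the file path are all illustrative); the full walkthrough is in the linked post:

--create a server audit writing to a file target
use master
go
create server audit [Audit_SelectTrace]
to file (filepath = 'C:\AuditLogs\')
go
alter server audit [Audit_SelectTrace] with (state = on)
go

--capture SELECTs against one table in the target database
use MyDatabase
go
create database audit specification [Audit_SelectTrace_Spec]
for server audit [Audit_SelectTrace]
add (select on object::dbo.MyTable by public)
with (state = on)
go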

Viking boat in Stockholm

There's nothing particularly special about this photo except to say that I find it very evocative of my time in Stockholm. Quite what the Viking boat is doing I was never sure, but the fact that it's there makes me think of Stockholm generally, in that there always seemed to be something like this happening in and around the town. Not always Viking boats, although Matilda and I did ride in one up at Sigtuna, but always interesting and varied stuff to see and do. It's not that London hasn't got things to see and do, it's just at a different pace and in a different style. I just love the fact that the water and boats were always so accessible. Sometimes Tilda and I would just jump on a bus or a train and go ride the boats from one place to another, just to hang and chill and see the views. Sweet memories indeed.

Looking south east from Åre high zone

A beautiful sunny day near the top of the cable car in Åre. This is a surprisingly good photo considering it was taken with a mobile phone, a Windows Mobile 6 one at that! Those were the days eh, stylus in hand... thank god for HTC Sense, which made it almost usable, although it did hang quite a lot anyway! That said, it still remains one of my favourite phones as I just loved the form factor. It was an HTC Diamond 2.

Looking north over Amberley from the South Downs Way

This is one of my favourite photos of all time; it's my default screen saver on all my machines, so if you ever employ me or have me around your office, you'll notice it on all of my devices (when I haven't got SSMS or Notepad++ open, that is).

It's taken from the South Downs Way looking north over Amberley on a fine September day when summer seemed to be lasting forever. I'm always looking for property in this area, but it doesn't come up very often. Feel free to ping me if you have a big family house for rent around there!

Coming back from Grinda

Over the past few years I've published content in all sorts of places across the web. I've finally managed to find a few hours to set up my domains properly, play with a bit of WordPress, and try to centralise all my content going forward. This site is the result of that work, and I hope its content will be of use to you. Since it's also a personal vanity project, it's got some photos, musings and other general things as well. That's why the first few posts are nice photos (and also because they're easier to put up quickly than the proper technical content, which will follow in the coming weeks and months!)

Coming back from Grinda, looking out the back of the boat, and there’s another right behind us.