Category Archives: SQL Server

Parameterise an IN clause in T-SQL

Parameters are a great feature in any SQL implementation. You can protect yourself and, more likely, your clients from SQL injection by always using parameterised queries for every call to the database.

I’ve seen some implementations in e-commerce packages where SQL injection would be a breeze.

The Scenario

For a recent client I had a .Net app which needed to query records but exclude those matching a specific set of criteria. Normally this is done with either a join or an IN clause. In this instance, as I was passing the criteria by parameter, I needed an IN clause.

An IN clause is used to either include or exclude records from a result. An example would be:

Select * from TableA where TableA.name IN ('John','Frank','Dave')

This would include only those rows where the name column matched one of 'John', 'Frank' or 'Dave'.

The Problem

Now there is currently no way to send an array of items through a parameter. So your choices are:

  • Invoke the query once for each criteria item, or
  • Use dynamic SQL

Both of these are highly inefficient, to say the least.
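
For contrast, here is roughly what the dynamic SQL route looks like (a sketch, using the table and values from the example above). Every distinct list produces a new statement text, so the server cannot reuse a single plan, and concatenating user input into the string is exactly how injection happens:

Declare @list nvarchar(max)
Set @list = '''John'',''Frank'',''Dave'''  -- typically concatenated from user input

Declare @sql nvarchar(max)
Set @sql = 'Select * from TableA where TableA.name in (' + @list + ')'
Exec sp_executesql @sql  -- recompiled for each distinct list, and injectable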

The Solution

Now before you start pulling your hair out, there is a simple and elegant solution. In T-SQL you can use XML. Yes, I did write XML; that is not a typo.

Here’s how

In T-SQL you can have a script which looks like:

Declare @x xml
Set @x = '<L>
        <I>John</I>
        <I>Frank</I>
        <I>Dave</I>
     </L>'

Select * from TableA where TableA.name in (
     Select * from ( select T.c.value('.','varchar(50)') as source
     from @x.nodes ('/L/I') T(c) ) Idss )

The inner query converts the XML into a result set by breaking the individual <I> nodes within the <L> node out into logical rows. 'T.c.value' then accesses each logical row and extracts the text within, here as a varchar (use whatever type matches your column, e.g. uniqueidentifier for GUID keys). Finally this is presented as a derived table named Idss (that name is important; without it you'll get an error).
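
To see the shredding on its own, you can run just the inner part; it returns one row per <I> node:

Declare @x xml
Set @x = '<L><I>John</I><I>Frank</I><I>Dave</I></L>'

-- One logical row per <I> node, extracted as text
Select T.c.value('.','varchar(50)') as source
from @x.nodes ('/L/I') T(c)
-- Returns three rows: John, Frank, Dave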

Now all you need to do is place your XML in the @x parameter. I chose a parameter type of string on the client as that was simpler to handle.
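
As a minimal sketch (the procedure and parameter names here are illustrative, not from the original app), the whole thing wrapped up server-side might look like this, with the client passing the XML as an ordinary string parameter:

Create Procedure GetByNames
    @names nvarchar(max)  -- e.g. '<L><I>John</I><I>Frank</I></L>' passed as a string
As
Begin
    Declare @x xml
    Set @x = Cast(@names as xml)  -- convert the string parameter to xml

    Select * from TableA where TableA.name in (
         Select * from ( select T.c.value('.','varchar(50)') as source
         from @x.nodes ('/L/I') T(c) ) Idss )
End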

Conclusion

The XML solution is quite elegant, requiring minimal effort on the client side to pass the parameters to the server. It is also much faster than dynamic SQL: the server stores the plan away, saving you recompiles on the server (always a performance killer).

You've seen how simple it is to specify a list of values for an IN clause as a parameter. A lot simpler than either of the alternatives!

The future wave of SQL development?

Last month Microsoft released the SQL Azure management tool, currently code-named project Houston. You can read more about it here. Houston is a Silverlight tool which provides a GUI to manage and create items within your SQL Azure database instance. I would encourage everyone to watch the demos and get hooked.

Houston will allow developers not familiar with SQL Server to build tables, views and stored procedures quickly as part of their web project. Though if someone can code a stored procedure, they wouldn't really be a novice developer, would they?

It is, in a sense, an Access-ish type of interface to SQL Azure. SQL Azure really does allow a small operation like us to build robust applications for enterprises, hosted in the cloud and managed by us. There is no need for us to rent expensive servers and set up all the software ourselves or, eek!, build our own with the expense and SLA requirements that entails. The cloud commoditises all that and lets us develop while someone else manages. And it has the additional flexibility that a customer can turn it off when not needed, so you only pay for the CPU and storage you are using.

This month I hope to attend several more seminars on SQL Server. And at the end of September there is SQLBits. For anyone who hasn't been or heard of it, SQLBits is a community conference which has now expanded beyond its original Saturday format. Check out their website and register. This time they are in York.

Synchronizing data from SQL Server

In this instalment I shall speak about a scenario which I solved for a retail client of mine.

The client's system ran on SQL Server 2005 with an Access 2000 front end, which was used by the product staff, the buyers and the merchandisers to create and manipulate products. These creations and changes then needed to be synchronised with the warehouse fulfilment system, which tracks the products and creates the shipping and packing documents when they are sent out. The warehouse system did supply an ODBC driver, but it was read-only.

We had to think a bit, because we obviously did not want synchronisation to kick off every time a user opened the new product screen; the server would be synchronising constantly. Also, when a product is created, sometimes only the SKU and other basic information are known, while details such as barcode, pack size and price aren't. So there needed to be a way to 'bookmark' a product before sending it to the other system.

Our solution was two-stage product creation. In the first stage the user creates the product and obtains a SKU. Once all the information is available, the user can then submit the product to the warehouse system and propagate the data to all the stores and tills. We accomplished this using a simple queuing table and DTS packages.

When a user submits a product, it gets noted in a general queuing table. Every 15 minutes a DTS package would run and create a delimited file, which was then FTP'd to a special folder on the warehouse system. That system had a daemon (it was Linux-based) which picked up the file and made the appropriate additions and changes. The queuing table would then be run down: each product would be inserted into each store, ready to track the stock, and into a table used to build the nightly update file for the tills.
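
As a rough sketch of the idea (the table and column names here are illustrative, not the client's actual schema):

Create Table ProductQueue
(
    QueueId     int identity(1,1) primary key,
    SKU         varchar(20) not null,                 -- the product being submitted
    SubmittedAt datetime not null default getdate(),
    ProcessedAt datetime null                         -- stamped once exported by the DTS package
)

-- The user's submit action simply inserts a row and returns immediately:
Insert into ProductQueue (SKU) values ('ABC123')

-- Every 15 minutes the DTS package exports whatever is waiting:
Select SKU from ProductQueue where ProcessedAt is null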

We found the use of these tables and DTS packages a superb and simple way to queue up data and free the user's system immediately. Errors could be collected and sent to an administrator, who could then rectify any problems.

It's altogether LDAP

This past week I had two significant events. Well, three really. First I won an Innovation Voucher, which will hopefully allow some research into a new product offering. I say "hopefully" because so far all the NWDA has done is ensure that my application meets some basic requirements. It will now be up to some suppliers to show interest, and we'll see what happens from there.

Then I upgraded one of my Linux Ubuntu machines to the latest Ubuntu. This took quite some time! It is quite a slow machine (a Celeron 667MHz), but it plays a critical role in the network as it runs DHCP, DNS, backup, NAS, Samba and WINS. Overall the upgrade went through cleanly, but Ubuntu needs to ask a few more questions upfront, especially relating to config files. I would walk away from the machine, the display would turn off, and when I came back I was apprehensive about hitting a key lest I answer a critical prompt incorrectly. Maybe there is a way to keep the display on all the time. Another issue I ran into was the GDM login. I had turned this off, so users logging in at the console would log in to a text session and then issue 'startx' if they wanted X Windows. The upgrade ignored that setting and re-enabled the GDM login. I removed GDM from the startup, but that still didn't help. So I just removed all the X stuff, and then had to tell GRUB2 to go to a text screen.

Once all that is done and you've upgraded GRUB fully to GRUB2, the boot time is minimal. It really is fast!

The next event was changing my local domain. I had a local domain which ended in .local. The .local domain, which is used by multicast DNS, would quite often mess up my VPN users, and in certain versions of Linux some utilities like ping will not work properly with a .local domain. So I decided to change it, to .localdom. My, what a process. I had to change LDAP and all its config files and such; surprisingly this was the easy part. Then came all the ldap.conf files on all the Linux machines, the Samba setup on the PDC, the DNS settings (of course), my Exim settings (more on this below) and the BackupPC e-mail domain setting.

Most things worked, except for the PDC. I determined through the logs that this was because the bind password had not been reset. Once that was done things worked, but it exposed a configuration issue with the way NetBIOS browsing was being done: apparently it is an absolute must that the PDC be the master browser. So it was back into smb.conf to correct this. But nothing I did fixed the problem. Eventually I gave up and rebooted the machine. Things worked! Sigh. All that hassle and it just needed a reboot.

However, a couple of days later I got complaints about e-mails bouncing. Upon enquiry with the recipients, it seemed the local domain name was being left in outgoing e-mails. The problem? The LDAP query inside the Exim config file that handles that rewrite had not been updated with the new base DN.

This week I've been looking into SQL Azure, Microsoft's cloud version of SQL Server. It's quite basic to start with, but with Microsoft the best is usually yet to come. I'm sure this will be a very popular service, and one that I may use in the future; I'll be doing some of my data mining testing on it. Oracle does offer Oracle in the cloud, but it's through Amazon EC2.