Fog Creek Software
Discussion Board

Dynamic SQL Queries

I want to hear about ways that people talk to SQL Server from a web application when using a great number of distinct queries over the same set of data. Situations where you're selecting data from a table, and sometimes you join that table with one or more other tables, or you filter by a varying set of columns in the WHERE clause, or you order by a varying sequence of columns, etc. -- all of which results in a vast number of unique queries.

The two ways I've done this are:

1. Build the SQL query text dynamically in the web script, usually in an entity class that separates the presentation layer from the data layer. For example, create an object of class Users, indicate the joins, the filters, the ordering, etc., via object methods, and then use the object to execute the query and return a data structure to the presentation layer. The SQL string is all built dynamically within the object -- and clearly the number of possible SQL string permutations can be quite large.

2. Use stored procedures which contain relatively static queries. Make each SP as flexible as possible, but generally wind up using multiple stored procedures that do very similar things.

Stored procedures are the established best practice in this scenario (compiled, faster, keeps your data access logic out of your business logic, etc.), but I've never been able to match the flexibility of option 1 using the stored procedures of option 2. Is it that I'm just not proficient enough with stored procedures? If so, what am I missing? Is there ever a good time not to use a stored procedure? Is there some kind of mixed approach of the above two options that works best?

DB Troubled
Friday, November 7, 2003

go with stored procedures:

- you will have better sql knowledge
- prevent sql injection
- improve performance (not on your mom's site of course)

Friday, November 7, 2003

You could use

EXEC ('sql string built with template and parameters')

this also allows you to do DDL stuff in stored procedures, if necessary/desirable.
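For instance (table name invented for illustration), EXEC lets a stored procedure run DDL that static T-SQL cannot parameterize:

```sql
-- Build and run a CREATE TABLE whose name is decided at run time.
-- This is exactly the sort of string you'd want to validate first.
DECLARE @tbl sysname
SET @tbl = N'Archive_2003'
EXEC (N'CREATE TABLE dbo.' + @tbl + N' (ID int NOT NULL)')
```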

The downside is that the T-SQL parser will evaluate your statement at run time instead of compile time, which is normally a 'Bad Thing' (TM).

So you have to weigh the benefit provided by using dynamic SQL against the cost of delaying and complicating error detection.

if you use dynamic sql, you might want to consider using some kind of home-grown assert mechanism to verify input params before executing the sql string.

hope it helps

Friday, November 7, 2003

Looks like it's time to post a link to Erland's page again:

You can make a stored procedure that accepts two dozen parameters and uses an arbitrary six of them to generate a new SQL statement.  If you use sp_executesql to execute this statement you will be safe from SQL injection, and can potentially get query plan reuse too.  (Getting query plan reuse with sp_executesql requires fully qualified table names, though, see BOL.)
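A minimal sketch of that approach (table and column names invented): append only the predicates you need, then hand the values to sp_executesql separately, so they travel as parameters rather than string fragments.

```sql
CREATE PROCEDURE dbo.SearchUsers
    @Name  nvarchar(100) = NULL,
    @Email nvarchar(200) = NULL
AS
BEGIN
    DECLARE @sql nvarchar(4000)
    SET @sql = N'SELECT UserID, Name, Email FROM dbo.Users WHERE 1 = 1'

    -- Only the filters actually supplied end up in the statement text.
    IF @Name IS NOT NULL  SET @sql = @sql + N' AND Name = @Name'
    IF @Email IS NOT NULL SET @sql = @sql + N' AND Email = @Email'

    -- Values are passed as parameters; declaring one the statement
    -- doesn't use is harmless, so both can always be listed.
    EXEC sp_executesql @sql,
        N'@Name nvarchar(100), @Email nvarchar(200)',
        @Name, @Email
END
```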

Matt Conrad
Friday, November 7, 2003

I use the trick in the example below to incorporate a variable number of filter parameters.

(@Name IS NULL OR Name = @Name)
AND (@Email IS NULL OR Email = @Email)

Assuming @Name and @Email are parameters of the query, you can pass both, either, or neither of them and still get results.  If you pass email but not name, @Name IS NULL evaluates to true and the Name = @Name piece is ignored.
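Expanded into a full stored procedure, the pattern might look like this (table and column names invented); each filter collapses to true when its parameter is left NULL:

```sql
CREATE PROCEDURE dbo.FindUsers
    @Name  nvarchar(100) = NULL,
    @Email nvarchar(200) = NULL
AS
SELECT UserID, Name, Email
FROM dbo.Users
WHERE (@Name IS NULL OR Name = @Name)    -- skip filter if @Name not passed
  AND (@Email IS NULL OR Email = @Email) -- skip filter if @Email not passed
```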

This does have performance issues for tables with millions of rows in them, but the trade-off of performance to maintainability may be worth it.

Sorting is another story, especially when the fields you sort by could be of varying data types.
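One common workaround for varying sort columns, sketched here with invented names, is a CASE expression per sortable column, so each branch keeps a single data type:

```sql
CREATE PROCEDURE dbo.ListUsers
    @SortBy varchar(20) = 'Name'
AS
SELECT UserID, Name, Email
FROM dbo.Users
ORDER BY
    -- Exactly one CASE matches; the others evaluate to NULL for
    -- every row and so contribute nothing to the ordering.
    CASE WHEN @SortBy = 'Name'  THEN Name  END,
    CASE WHEN @SortBy = 'Email' THEN Email END
```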

Wade Winningham
Friday, November 7, 2003

We use the non-stored procedure approach.

-SQL Server optimizes frequently used queries automatically. We've never found the "speed" improvements of stored procedures *that* significant.
-Stored procedures are not easily portable.  All of our code currently works on multiple platforms and databases, without code changes.
-Building the SQL dynamically in a template provides much more flexibility, and can result in simpler / faster queries. There are things you can handle at the SQL level (e.g. the example about @Name being NULL or not), but you can often dramatically simplify your SQL if you do the logic in advance based on whether you have those parameters or not.
-We've also found that a lot of logic for the dynamic SQL for what we are doing is business logic anyway.  So the argument to keep business logic and data access logic separate is a bit of a red herring.
-If you are working on a web application and you realize that your SQL has to change (say to return a few additional fields), it's much easier to open a template containing SQL and make those changes than it is to log into the database and update your stored procedures.

My 2c

Friday, November 7, 2003

Agree with everything Phibian said.

The performance gains from having an SP compiled are negligible with modern SQL Server versions and hardware.

Friday, November 7, 2003

Fire up Profiler and check for yourself.  That is the only true way to tell.  Even stored procedures can cause recompiles depending on how you write your code.  But isn't that the *one* true answer in coding... it depends?

In my opinion, if you are using enterprise-level DBs, stick with stored procedures.  If portability is really an issue, it is easier to change an SP than to recompile an app that has SQL queries hard-coded.  And SPs can be safer and more secure.

my 1.5 cents
Friday, November 7, 2003

Would just like to add two things on that:

1. SP's form the core data access layer in any n-tier architecture. They're secure and efficient on that basis, and highly recommended by the very designers of the MS SQL engine.

2. SP's have the execution plan precompiled, so for complex queries and complicated logic it most certainly performs better than "raw TSQL". The performance boost is negligible for simple SELECT statements.

Friday, November 7, 2003

Can anybody point to any documentation showing that SP's are indeed more secure? Yes, they lend themselves to isolation of functionality from a server login and role perspective, but I have rarely run into a case where I let individual users do their thing via SQL. So, what is the real security benefit?

Since SQL Server 2K, as Phibian points out, common SQL statements are auto-parameterized, and query plans are generated and cached just as they are for SP's. This gives you all the same benefits of an SP, granted not with the initial queries. (Reference Pg. 79, Inside MS SQL Server by Kalen Delaney)

I prefer a well thought out DA layer in the BL code so I don’t have to hunt and peck all around the architecture for enhancements and bugs.

Friday, November 7, 2003

I use a series of classes (Select, Insert, Update, Delete) that completely encapsulate building queries.  This prevents SQL injection just as well as stored procedures and is far easier than messing around with string concatenation all the time.

Pseudo code:

query = new Query_Select;
query.Field('AVG(Total)', 'AvgTotal');
query.Join('Invoices.CustomerID', 'Customers.CustomerID');
query.Where('Customers.Name', 'Bob Smith');
query.Where('Invoices.ID', 5, '!=');
query.OrWhere('Invoices.ID', 6, '<');
query.Where('AvgTotal', 200, '>');
resultset = query.execute();

It's a pretty flexible design.  And I include a lot of high-level where-like functions for handling a range of situations.

Couple of advantages:
* Encapsulates the SQL language for portability among different SQL platforms.
* Queries can be passed around and manipulated.

The last advantage is pretty cool.  I have functions which apply filters to any existing query.  I just pass the query object by-reference to the function.

Almost Anonymous
Friday, November 7, 2003

There is no one right answer, and for all the gains there are plenty of losses. However the procedure approach is valuable for the following reasons:

-The account context used to access the database only needs to be given rights to call the stored procedures you identify as the entry points -- even if someone ownzorz your web box, or a malicious employee puts a back door in a component, or you leave a gaping injection-attack hole (which is ridiculously common), in this scenario they can do nothing more than execute the entry stored procs. Compare this to the app needing access to the tables, etc. ... SELECT * FROM cust_credit_card. Look up "ownership chaining" in the docs to understand this.

-Stored procs are like a first layer of data abstraction objects - if you constrain all data requests and insertions/updates to them, with no client access to the schema, you can do wholesale changes with no visibility to the users of the db.
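The permissions half of that argument can be sketched in two statements (proc, table, and role names invented); ownership chaining then lets the proc read tables the role itself cannot touch:

```sql
-- The web app connects as a member of WebAppRole and can only
-- call the designated entry points...
GRANT EXECUTE ON dbo.GetCustomerOrders TO WebAppRole

-- ...never the underlying tables, so even an injected ad-hoc
-- statement running under this role gets nothing from the schema.
DENY SELECT, INSERT, UPDATE, DELETE ON dbo.Customers TO WebAppRole
```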

Dennis Forbes
Friday, November 7, 2003

How about performing the dynamic processing on the client?  The simplicity of WHERE and ORDER BY can make it easy to overlook this idea, but nothing prevents you from doing the filtering and sorting in the client code.

Friday, November 7, 2003
