Monday, November 09, 2009

SQL Data Scripts

A standard practice in SQL Server development is to make any changes to objects or data using a script and apply it to the DEV system; once verified, it is applied to QA; once verified, it is finally applied to production.

You can take this one step further, and write your scripts to be idempotent - i.e. if the script is run multiple times, the outcome will be the same.

For example, let's say you have a lookup table tblSettings like this:

ID int IDENTITY (1,1),
ItemText nvarchar(50)

the table holds two records with ID's of 1 and 2, and you wish to add a third. You could simply write

INSERT tblSettings ( ItemText ) SELECT 'application author'

but if you execute that script more than one time, you get multiple records with that same text.

Now, this is a back-end table, and you have verified that the next ID will be 3, using this:

SELECT IDENT_CURRENT('tblSettings') + IDENT_INCR('tblSettings')

The solution is to test if that record exists before you insert the record, like this:

IF (SELECT 1 FROM tblSettings WHERE ID = 3 ) IS NULL
INSERT tblSettings ( ItemText ) SELECT 'application author'

in this way, the script can be run multiple times in DEV, QA or even production, and the outcome will always be that the table holds exactly those three records.
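A variation on the same pattern, shown here as an illustrative sketch (it assumes ItemText values are meant to be unique, which the original post does not state), keys the existence test on the text itself, so the script stays safe to re-run even when the next identity value is not known in advance:

```sql
-- sketch: guard on the text value instead of a hard-coded ID
-- (assumes ItemText is intended to be unique in tblSettings)
IF NOT EXISTS ( SELECT 1 FROM tblSettings
                WHERE ItemText = 'application author' )
    INSERT tblSettings ( ItemText )
    SELECT 'application author'
```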

Now, let's say that you need to change the record just added to say 'application developer'. You could simply write

UPDATE tblSettings
SET ItemText = 'application developer'

but why "update" a record to the same value? The solution here is simply to include the old value in the criteria, like this:

UPDATE tblSettings
SET ItemText = 'application developer'
WHERE ItemText = 'application author'

The benefit of these methods is to touch the data only where necessary, and to ensure that the outcome is precisely what is expected.

Wednesday, November 04, 2009

Reusable code

My definition of reusable code: modular code that is flexible enough that it can be used in more than one way.

For example, let's say that you have a table Orders like this:

ID int identity,
OrderDate datetime,
Quantity int

and you want the total orders for one year. Fine. All you need is something like this:

@Year int

SELECT SUM(Quantity) AS Total
FROM Orders
WHERE YEAR(OrderDate) = @Year

which answers the immediate need.

But if you write it like this, you can execute the sp without a parameter & get the current year:

@Year int = 0

SELECT SUM(Quantity) AS Total
FROM Orders
WHERE YEAR(OrderDate) =
    CASE WHEN @Year = 0 THEN YEAR(GETDATE()) ELSE @Year END

Another approach is to add a 2nd parameter for the month, like this:

@Year int = 0,
@Month int = 0

SELECT SUM(Quantity) AS Total
FROM Orders
WHERE YEAR(OrderDate) =
    CASE WHEN @Year = 0 THEN YEAR(GETDATE()) ELSE @Year END
AND ( MONTH(OrderDate) = @Month
      OR @Month = 0 )

which allows you to specify a single month, or use a zero to get the whole year.

Granted, these are very basic examples, but if you build in little hooks like these at the outset, you can save time later on.
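Pulling those fragments together, a complete procedure might look like the sketch below; the name prcOrderTotals is just an illustrative assumption, not from the original post:

```sql
-- sketch: prcOrderTotals is a hypothetical name
CREATE PROCEDURE dbo.prcOrderTotals
    @Year  int = 0,
    @Month int = 0
AS
SELECT SUM(Quantity) AS Total
FROM Orders
WHERE YEAR(OrderDate) =
      CASE WHEN @Year = 0 THEN YEAR(GETDATE()) ELSE @Year END
AND ( MONTH(OrderDate) = @Month OR @Month = 0 )
GO

-- usage:
-- EXEC dbo.prcOrderTotals              -- whole current year
-- EXEC dbo.prcOrderTotals 2008, 6      -- June 2008 only
```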

Saturday, October 24, 2009

Tally tables

I first learned about tally tables from the forums at SqlServerCentral, which is a great resource for information about SQL Server. (Free registration is required to view the content.)

A tally table is simply a single-column table that holds integers from 1 to any arbitrary large number. For example, this code from SSC will create a tally table of numbers from 1 to 11,000:

SELECT TOP 11000
IDENTITY(int,1,1) AS N
INTO dbo.Tally
FROM Master.dbo.SysColumns sc1,
Master.dbo.SysColumns sc2


The benefit of a tally table is that it can be used to implement set-based solutions, thus avoiding cursors or WHILE loops.

Example 1: Let's say that you need to generate a list of all dates from 05/12/2008 to 06/02/2008 inclusive. You might use a temp table or table variable, and construct a loop that inserts records for each date in the range. By using a tally table, all you need is this (using variables to make it flexible):

DECLARE
@start datetime,
@end datetime

SELECT
@start = '05/12/2008',
@end = '06/02/2008'

SELECT DATEADD(d, (N - 1), @start) AS TheDate
FROM dbo.Tally
WHERE N BETWEEN 1 AND DATEDIFF(d, @start, @end) + 1

Example 2: You have an Orders table like this:

ProductID int,
OrderDate datetime,
Quantity int

and you need the total orders for one product, for one year, by month. The basic query might look like this:

SELECT
MONTH(o.OrderDate) AS OrderMonth,
SUM(o.Quantity) AS Total
FROM Orders o
WHERE YEAR(o.OrderDate) = 2007
AND ProductID = 49
GROUP BY MONTH(o.OrderDate)

However, this will not return a row for any month that did not have
an order. You can use a tally table to ensure that you have
totals for all 12 months, even for months without an order:

SELECT
T.N AS OrderMonth,
SUM( CASE WHEN MONTH(o.OrderDate) = T.N
          THEN o.Quantity
          ELSE 0
     END ) AS Total
FROM dbo.Tally T
CROSS JOIN Orders o
WHERE T.N BETWEEN 1 AND 12
AND YEAR(o.OrderDate) = 2007
AND ProductID = 49
GROUP BY T.N

There is one more example that I'd like to share, where you can
insert multiple rows from a single edit record, but it is too long
to post in the blog format, so I will assemble this post into
a Word document and add the URL when complete.

EDIT: The full article with examples is here.

Thursday, October 15, 2009

SQL Lookups

I am working with an Access app that was converted to use a SQL Server database, and gradually replacing Access queries with T-SQL or stored procedures for better performance.

If you have used Access for a while, you know that the so-called Domain Aggregate functions such as DCount, DLookup, and DSum are very expensive in terms of resource usage. Even if you write your own VBA function to perform the lookup, the fact is that the Jet database engine is not efficient when looking up a single value.

My challenge was that my form needed to pull data from 3 different queries. However, after binding a query to the form to get the 1st value, using DCount for the 2nd value & then DLookup for the 3rd, the form would open quite slowly. In fact, you could observe the various controls on the form being painted as the 3 values were pulled in.

The solution was to create a stored procedure to look up the values, create a pass-thru query to execute the sp, and then open a DAO recordset in the Form_Open event & grab the values from the pass-thru query.

Each of the Access queries / lookups was re-written using T-SQL. Then we declare 3 variables, use the SQL queries to get the values, and write a SELECT statement to return the values; something like this:

DECLARE
@value_1 int,
@value_2 int,
@value_3 int

SELECT @value_1 = (first query converted to T-SQL)

SELECT @value_2 = (second query converted to T-SQL)

SELECT @value_3 = (third query converted to T-SQL)

SELECT
@value_1 AS value_1,
@value_2 AS value_2,
@value_3 AS value_3

Friday, September 25, 2009


Week-ending dates

I am working on a query that groups data by the week-ending date.

In MS-Access, you can use DatePart to get the day of the week ( 1 to 7 ) and then add 6 thru 0 days to your test date to calculate the week-ending date. For a normal Sunday - Saturday workweek, if your date field is TheDate you would use this:

DateAdd("d", 7 - DatePart("w", TheDate), TheDate)

...and if your workweek is different, e.g. Saturday - Friday, you can use the optional argument to specify the 1st day of the week:

DateAdd("d", 7 - DatePart("w", TheDate, vbSaturday), TheDate)

Now, if you are working in SQL Server, the basic Sunday - Saturday expression is similar:

DATEADD(d, 7 - DATEPART(dw, TheDate), TheDate)

However, in T-SQL there is no optional parameter for the 1st day of the week - you would need to do this:

SET DATEFIRST 6

SELECT DATEADD(d, 7 - DATEPART(dw, TheDate), TheDate)
FROM TheTable

...but the DATEFIRST is a global setting, so it might affect other code that depends on the default setting.

One possible solution is this:

SET DATEFIRST 7

SELECT DATEADD(d, 6 - DATEPART(dw, TheDate), TheDate)
FROM TheTable

...which forces the 1st day to the default, and then uses 6 instead of 7 to calc the offset days. This will work in a stored procedure, but not in a view.

OTOH if you are certain that the DATEFIRST never varies from 7, you could simply use

DATEADD(d, 6 - DATEPART(dw, TheDate), TheDate)

which does give the correct result for the Saturday - Friday workweek & can be used in sp's and views.
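If you cannot rely on the DATEFIRST setting at all, one common trick (shown here as a sketch, not taken from the original post) is to fold @@DATEFIRST into the arithmetic. Because no SET statement is needed, it works in both stored procedures and views, and it also handles a Saturday itself correctly, which the fixed 6 - DATEPART(dw, TheDate) expression maps back to the previous Friday:

```sql
-- week ending Saturday (Sunday - Saturday workweek), under any DATEFIRST:
-- (DATEPART(dw, TheDate) + @@DATEFIRST - 1) % 7 yields 0=Sunday .. 6=Saturday
SELECT DATEADD(d, 6 - (DATEPART(dw, TheDate) + @@DATEFIRST - 1) % 7, TheDate)
FROM TheTable

-- week ending Friday (Saturday - Friday workweek), under any DATEFIRST:
-- (DATEPART(dw, TheDate) + @@DATEFIRST) % 7 yields 0=Saturday .. 6=Friday
SELECT DATEADD(d, 6 - (DATEPART(dw, TheDate) + @@DATEFIRST) % 7, TheDate)
FROM TheTable
```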

Thursday, August 27, 2009

SSMS - INSERT script

SSMS allows you to quickly generate table scripts, such as INSERT, SELECT, and UPDATE, which can save lots of time when writing T-SQL.

However, I noticed that if you have an IDENTITY column the INSERT script will leave that out.

Granted, most of the time you won't be inserting a value into that column, and in fact you need SET IDENTITY_INSERT MyTable ON if you do wish to insert into that column. But I would prefer to see that column in the script, so I can remove it myself according to my needs.
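For reference, the explicit-insert pattern looks like this (MyTable and its columns are placeholders, not from the original post); the IDENTITY column must appear in the column list, which is exactly why it is handy to have it in the generated script:

```sql
SET IDENTITY_INSERT dbo.MyTable ON

INSERT dbo.MyTable ( ID, ItemText )   -- the IDENTITY column is listed explicitly
VALUES ( 3, 'application author' )

SET IDENTITY_INSERT dbo.MyTable OFF
```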

Sunday, August 09, 2009

SQL script encoding

During development work with SQL Server, I will generally write a series of change scripts that need to be applied in a specific order. For example, one script might add a new table and another creates a stored procedure that uses that new table. Therefore, when I name the scripts I will prefix them with a sequential number e.g. 01_tblCustomer.sql, 02_prcGetCustomer.sql and so forth. This has several advantages: each script can be written & tested separately; the scripts can be applied in the correct order; the dependencies are recorded correctly in SQL Server.

Originally I would pass the group of scripts to the DBA to be applied to the production database, but as the number of scripts grew the chance for error grew as well.

I wrote a small VBA program to loop thru the script folder & build up a change script, so the DBA only has to apply a single large script. This worked quite well when each individual script was started from a blank query window, but when I generated a Modify script using SSMS there were extra characters at the beginning of the SSMS-generated script; specifically, bytes 255, 254 (a Unicode byte-order mark) at the beginning of the first line.

It turns out that when you start a script from a new query window & save it, the default encoding is

Western European (Windows) - Codepage 1252

which corresponds to ANSI or plain text. However, when you use SSMS to generate a script, the default encoding is

Unicode - Codepage 1200

which is the reason that VBA was reading the extra characters from the top of the script file.

So then, when saving the SSMS-generated scripts, all I had to do was to use the Save button option Save with Encoding... and select the 1252 encoding.

Sunday, July 19, 2009

Pass-Thru Solutions

Six months ago we converted an Access 2003 application to use MSSQL 2005 as the database, and are gradually converting reports to use pass-thru queries, with either T-SQL or stored procedures, to take advantage of the speed & power of the SQL Server.

There are many complex reports, some of which have embedded summary subreports, and therein lies the snag - Access does not allow you to use a stored procedure for subreports.

The main reports that have user criteria are easy - create a stored procedure that accepts parameter(s) and the pass-thru query is re-written in the button click event. So the VBA code for that might look like this:

CurrentDB.QueryDefs("MyQuery").SQL =
"EXEC dbo.prcMyQuery '" & Forms!MyForm!MyDate & "' "

and the query executes this:

EXEC dbo.prcMyQuery '07/19/2009'

thus returning the desired records.

But what to do about the subreport? That query is often far more complex than the main query, since it provides a summary with aggregates. . .

The solution is to create a View, link to the view (which Access treats as an attached table) and then create a simple Access query to apply the filter criteria. So if the view is attached as

viwMyView
then the Access query might look like this:

SELECT viwMyView.*
FROM viwMyView
WHERE viwMyView.DateField = Forms!MyForm!MyDate

which applies the same filter to the subreport that is applied to the main report. We still get the power of SQL Server to build up the summary data, and the number of rows is small enough that the Access query overlay does not impact performance.

Saturday, June 27, 2009


Testing if an object exists

When I work with stored procedures, I will always test if the procedure exists, drop it if it does exist, and then create the stored procedure. This produces nice clean code which can be saved to source control & run at any time as needed. Note that the create code includes the grant permissions.

I had been using "the long version" of the IF EXISTS statement, for example:

IF EXISTS ( SELECT * FROM sys.sysobjects
            WHERE [name] = 'TESTME' AND [type] = N'P' )
    DROP PROCEDURE TESTME

This does work just fine, but MSSQL does not allow more than one schema-scoped object (table, view, sp, udf) with the same name, so testing for the type is not necessary. We could use just this:

IF EXISTS ( SELECT * FROM sys.sysobjects
            WHERE [name] = 'TESTME' )
    DROP PROCEDURE TESTME

However, there is a unique ID for every object & a built-in function OBJECT_ID() to get that ID, so this is all we need to test if an object exists:

IF OBJECT_ID('TESTME') IS NOT NULL
    DROP PROCEDURE TESTME
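Putting it all together, a complete drop-and-create script (including the grant mentioned above) might look like this sketch; the procedure body and the role name app_users are illustrative assumptions:

```sql
IF OBJECT_ID('dbo.TESTME') IS NOT NULL
    DROP PROCEDURE dbo.TESTME
GO

CREATE PROCEDURE dbo.TESTME
AS
    SELECT GETDATE() AS RightNow
GO

-- the create code includes the grant permissions
GRANT EXECUTE ON dbo.TESTME TO app_users
GO
```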
Saturday, June 20, 2009

External Change Tracking

This article relates to my previous post that describes a change-tracking system for MS-Access. You can view the complete code with comments here: Change Tracking in Access

In a very active database, the tracking table can become quite large, so here are two methods to move the tracking table to a separate file.

Remember - always make a backup copy before making significant changes to your database.

Method 1

Create a new blank database.
Import the tracking table from the original file.
Delete the tracking table from the original file.
In the original file, create a link to the table in the new file.

...all of the above can be done using the Access GUI.

Method 2

This module will create a new file in the same folder as the original file, move the tracking table and link to it. You would run this only one time...

Sub External_Change_Tracker()
Const MyTable = "tbl__ChangeTracker"
Dim dst As DAO.Database
Dim MyNewFile As String
Dim src As DAO.Database
Dim tdf As DAO.TableDef
On Error GoTo err_sub
' create new db
MyNewFile = CurrentProject.Path & "\Change_Tracker.mdb"
Set dst = CreateDatabase(MyNewFile, dbLangGeneral)
' copy the tracking table
DoCmd.CopyObject dst.Name, MyTable, acTable, MyTable
' drop local tracking table
DoCmd.DeleteObject acTable, MyTable
' link to table in external file
Set src = CurrentDb
Set tdf = src.CreateTableDef(MyTable)
With tdf
.Connect = ";DATABASE=" & dst.Name
.SourceTableName = MyTable
End With
src.TableDefs.Append tdf
exit_sub:
Exit Sub
err_sub:
MsgBox Err.Description, vbCritical, "Error # " & Err.Number
Resume exit_sub
End Sub

You can view the complete code here: External_Change_Tracker

Whether you use method 1 or 2, you will still need to perform a Compact & Repair to reclaim the space used by the deleted table.

Sunday, June 14, 2009

Abort SQL batch

I am working in an environment that has three databases:

database ABC on server ABC - the production system

database ABC_QA on server ABC2 - for QA testing

database ABC_DEV on server ABC3 - for development

The developer writes scripts to create / alter objects & manipulate data during development on ABC_DEV, and then sends them to the DBA to run against ABC_QA for QA testing. When QA is complete, the developer prepares a final script for the DBA to run against ABC. In both cases, the "script" file is a compilation of multiple individual scripts, each of which ends with a GO statement.

A problem arose when the DBA ran an intermediate QA script against production - the script does not include a USE statement; but even if it did, and that failed, the script would continue to run, and any create object statements would execute against whatever the current database connection was (usually master). Not good.

I did some research on RAISERROR and found that a severity level of 20 or higher will actually close the database connection, and thus abort the entire script, preventing any extra objects from being created. Adding the following to the beginning of the script takes care of this:

IF NOT EXISTS ( SELECT 1
                FROM sys.sysdatabases
                WHERE [name] = 'ABC' )
    RAISERROR('This script must run on the production server - aborting.', 21, 1) WITH LOG

which yields the following message to the DBA:

Msg 2745, Level 16, State 2, Line 5
Process ID 57 has raised user error 50000, severity 21. SQL Server is terminating this process.
Msg 50000, Level 21, State 1, Line 5
Msg 0, Level 20, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.

...and the rest of the script is aborted without any further processing.

Sunday, May 10, 2009

SQL templates

SSMS has a very handy feature in the Template Explorer (Ctrl-Alt-T) that contains 100's of templates for all sorts of scripting tasks you might require:
  • Open the Template Explorer
  • Drill-down to the category
  • Right-click on the desired task and click Open.
  • Press Ctrl-Shift-M to display a popup screen & enter your parameters
  • Click OK
You can also create custom templates:
  • Right-click on the top node, select New ~ Folder and give it a name e.g. Custom
  • Right-click on the Custom folder, select New ~ Template and give it a name e.g. MyTemplate
  • Right-click on MyTemplate, select Edit
  • Observe the Connect dialog & connect to your server
  • Type in the template code and click Save
. . .but when you do that, the default location is something like

C:\Documents and Settings\Owner\Application Data\Microsoft\Microsoft SQL Server\90\Tools\Shell\Templates\Sql\Custom

so you will probably want to use Save As... and store your custom templates in a subfolder closer to where you keep your regular scripts & apply source control. NOTE: Your template file should also be set to read-only, because when you press Ctrl-Shift-M the displayed file name does not change, so it is easy to overwrite the template itself by accident.

One basic example of using a template is to add a new column that has a default value.

Here is a sample hard-coded script:

ALTER TABLE dbo.Company
ADD Flag bit NOT NULL
CONSTRAINT DF__Company__Flag
DEFAULT (0)

You could of course create a simple template where you just search / replace ttt = table name, ccc = column name, etc. but it gets very cumbersome:

ALTER TABLE dbo.ttt
ADD ccc ddd NOT NULL
CONSTRAINT DF__ttt__ccc
DEFAULT (vvv)

This is a template that works with SSMS:

ALTER TABLE dbo.<table_name, sysname, table_name>
ADD <field_name, sysname, field_name> <data_type, nvarchar(128), data_type> <nullable, nvarchar, NOT NULL>
CONSTRAINT DF__<table_name, sysname, table_name>__<field_name, sysname, field_name>
DEFAULT (<default_value, nvarchar, default_value>)

Within each set of brackets, the 3 arguments are:
  • title of the parameter that is displayed in the popup screen
  • the data type of the parameter that is to be inserted
  • the default value of the parameter that is to be inserted
I have posted a few templates as Word documents with the full code, screen shots & more examples; you can view them here:




Saturday, May 02, 2009

Remote Queries in Access

MS-Access makes it very easy to link to tables in other Access databases . . .

File ~ Get External Data ~ Link Tables...

just navigate to your mdb file & click Link, select the table(s) and click OK.

But you can also connect & retrieve data from another file, without creating a linked table. This little-known technique has been around AFAIK since the Access 2.0 days. I'm not suggesting that you use this instead of linked tables, but it has certain advantages & flexibility in specific cases.

Begin a new query, don't add any tables, and switch to the SQL view. Type in your SQL statement, which might look like this:

SELECT *
FROM Table1
IN 'C:\MyFile.mdb'

Note the single quotes around the file name. When you run this query, you'll see the data in that external table. If you want to apply some criteria, you could use this:

SELECT *
FROM Table1
IN 'C:\MyFile.mdb'
WHERE MyField = "something"

One advantage of this technique is that it bypasses the show / hide settings for hidden & system objects ( Tools ~ Options ~ View ~ Hidden objects, and ~ System objects ).

Let's say you want to view all the user tables in an external file - you could use this:

SELECT [name]
FROM MSysObjects
IN 'C:\MyFile.mdb'
WHERE [type] = 1
AND LEFT([name],4) <> "MSys"

MSysObjects is a system table that contains all the objects in the Access mdb. The type = 1 denotes local (not linked) native Access tables.
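As a further sketch (the type codes below are commonly reported values for MSysObjects, which Microsoft does not officially document, so treat them as an assumption): type 6 is usually a linked Access table and type 4 a linked ODBC table, so you can list linked tables in the external file as well:

```sql
SELECT [name], [type]
FROM MSysObjects
IN 'C:\MyFile.mdb'
WHERE [type] IN (4, 6)
AND LEFT([name],4) <> "MSys"
```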

Taking this a step farther (with a little bit of VBA code) you could view a list of the fields in that external table (watch for line breaks) . . .

Dim db As DAO.Database
Dim qdf As DAO.QueryDef
Dim fld As DAO.Field
Dim sql As String
Set db = CurrentDb
sql = "SELECT * FROM Table1 IN 'C:\MyFile.mdb' "
Set qdf = db.CreateQueryDef("", sql)
For Each fld In qdf.Fields
Debug.Print fld.Name
Next fld

Sunday, April 26, 2009

Change Management

Managing change in SQL Server applications requires a process that protects data integrity & maximizes production uptime, while providing flexibility for new development. I would propose the following as a minimum configuration to accomplish these goals.

Create 3 database environments -- one for production, one for development, one for QA. Only the DBA has direct access to production or QA. The developer has direct access to only the development db. These 3 db's can sit on separate servers, or alternately the development and QA db's can run on named instances on the same server if size & activity are modest.

Create 3 application front-ends -- one for each database; containing some code, config file or other means to connect the front-end to the correct db; having some method to prominently display the connected server & database when it is not connected to the production db. (The front-end is really a single design, it is only the connection string that varies between the 3 copies.)

To begin, the DBA takes a backup from production & restores it to both the development & QA servers.

The developer connects the DEV front-end to the DEV db, writes T-SQL scripts to make any changes to the design, and updates the front-end for those changes. It is possible to use the GUI to generate the change script, but the GUI is never used to actually apply those changes - that is accomplished with the script. The developer also prepares documentation for the desired change in the front-end.

At this point, the developer may decide to share the DEV copy with select stakeholders, especially if the change is complex, for feedback & verification that the change meets the requirements. This step is commonly referred to as user acceptance testing.

When the developer verifies the change in that environment, they pass the change script to the DBA who reviews the script, and then applies it to the QA db. The developer will then connect the QA front-end to the QA db and verify the change.

The QA person will then evaluate & verify the change using the documentation provided by the developer. Upon their sign-off, the app is ready for release.

The developer notifies all users that the database update will commence.

The DBA will backup all 3 db's and then apply the script to production. Following that, the developer will connect the production front-end to the production db & verify. The developer publishes the new front-end & notifies the users that the update is complete.

The DBA takes a fresh production backup & restores to both DEV and QA, and the developer reconnects DEV to DEV and QA to QA. The DBA may also pass a copy of the backup file to the developer, if the 4th environment is used as follows.

A 4th possible environment is for the developer to maintain a copy of the DEV db on their PC, using e.g. the MSSQL Developer Edition along with a 4th copy of the front-end. This provides ultimate flexibility of design; the developer restores the backup to that system & can use that as a test-bed for feasibility studies of design changes without impacting the true DEV environment.

Throughout all the above processes, source control is used to maintain matching copies of the front-end & database scripts to allow rolling back changes that have unintended consequences.

Saturday, April 11, 2009

SQL scripts without sp_executesql

There is a totally useless scripting option in SSMS 2005 that you might think mimics the behavior of EM 2000, but it does quite the contrary.

Using Enterprise Manager in MSSQL 2000 you could generate a script to DROP and / or CREATE an object. The DROP statement used the "if exists" syntax, so you could replace an object with a new version. If you wanted to change an object (except for tables) you generate just the CREATE and then change that one word to ALTER. Simple stuff.

However, in SSMS there is an option "Include IF NOT EXISTS clause" which does several totally unnecessary things:

First of all, you will see all your statements wrapped in dynamic SQL:

EXEC dbo.sp_executesql @statement = N'
    ... all your statements are here ...
'

If you choose Create, it tests IF NOT EXISTS followed by CREATE, but you just scripted an object that exists, so it does nothing.

If you choose Modify, it tests IF NOT EXISTS followed by ALTER, but how can you alter an object that does not exist?

What a mess.

If you want to generate a script like 2000, then you need to turn that option OFF, and then separately generate the scripts for the DROP and the CREATE, and put them together.

Saturday, April 04, 2009

SQL - list columns in a table

I have gotten into the habit of using either sp_help or sp_columns to view a list of column names, for example

sp_help 'MyTable'


sp_columns 'MyTable'

which are much faster than using the Object Explorer in SSMS to drill-down thru the database, table and finally the columns. The downside of these sp's is that they display much more info than usually needed; I just want a list of columns & data types etc.

I created a new stored procedure wm_columns, click the link to view.

I began with sp_columns since it returns a single record set. Then, after looking in BOL I created a temp table to capture the output, and then execute the sp to populate the table:

DECLARE @sql nvarchar(4000)

CREATE TABLE #temp (
TABLE_QUALIFIER sysname,
TABLE_OWNER sysname,
TABLE_NAME sysname,
COLUMN_NAME sysname,
DATA_TYPE smallint,
[TYPE_NAME] sysname,
[PRECISION] int,
[LENGTH] int,
SCALE smallint,
RADIX smallint,
NULLABLE smallint,
REMARKS varchar(254),
COLUMN_DEF nvarchar(4000),
SQL_DATA_TYPE smallint,
SQL_DATETIME_SUB smallint,
CHAR_OCTET_LENGTH int,
ORDINAL_POSITION int,
IS_NULLABLE varchar(254),
SS_DATA_TYPE tinyint
)

INSERT INTO #temp
EXEC sp_columns @Table

Once that is done, it is simple to select only the columns that interest me:


. . . but it would be nice to be able to sort the list alphabetically by the field name, so after populating the temp table, I create a unique index based on an argument @SortAlpha to my stored procedure:

EDIT ON 04/09/2009: There was a bug in this part of the code, the alpha sort did not work. Fixed here & also in the download file . . .

SET @sql = N'CREATE UNIQUE CLUSTERED INDEX IX_temp ON #temp (' +
    CASE @SortAlpha
        WHEN 1 THEN N'COLUMN_NAME'
        ELSE N'ORDINAL_POSITION'
    END + ')'

EXEC sp_executesql @sql

. . . which causes the output to be sorted according to the argument.

Wednesday, April 01, 2009

Linking Access to SQL Server

I have an Access app that links to SQL Server, and when I tried to attach a new SQL table in code I ran into a snag - the linked table would forget the user name & password, so Access would pop up one of those login dialog boxes.

I compared the value of the table Attributes (a long integer) & discovered that my new table was different. A quick trip to Access Help was truly helpful - I was missing the attribute dbAttachSavePWD, so basically here is what worked (the fix is the .Attributes line):

Sub SQL_AttachOne(TableName As String)
Dim db As DAO.Database, tdf As DAO.TableDef
Const s = "the connection string goes here"
Set db = CurrentDb
Set tdf = db.CreateTableDef
With tdf
.Name = TableName
.SourceTableName = TableName
.Connect = s
.Attributes = .Attributes Or dbAttachSavePWD
End With
db.TableDefs.Append tdf
End Sub

The Or operator sets the appropriate flag for Access to save the info.

Friday, March 27, 2009

Loop thru SQL tables

"How can I loop through all the tables in a SQL Server database, and run T-SQL on each table where the name matches a pattern, without using cursors?"

This question was recently posted at a SQL forum. Here is the solution. . .

SQL Server has a system stored procedure sp_tables that displays all the tables in the database, and has an optional parameter that allows filtering the results. For example, to list all tables that have the prefix 'tbl' you would use:

sp_tables 'tbl%'

To use these results in subsequent processing, we need to save the output to a table. After looking up the sp_tables procedure in BOL, we can create a table to hold the results, and then execute the procedure to populate the table:

CREATE TABLE #temp (
TABLE_QUALIFIER sysname,
TABLE_OWNER sysname,
TABLE_NAME sysname,
TABLE_TYPE varchar(32),
REMARKS varchar(254)
)

INSERT INTO #temp
EXEC sp_tables 'tbl%'

Now we need a strategy for looping thru these records & processing each table one-by-one - - We can add a flag to the temp table to mark when each record is processed:

EXEC sp_executesql N'ALTER TABLE #temp ADD Processed bit NOT NULL DEFAULT (0)'

Now, we select the first table name, run our T-SQL on that table, and mark the flag as being processed. The WHILE loop continues as long as there are any records that have not been processed. At the beginning of our code we need to insert the line:

DECLARE @Table sysname
and then we can write our loop:

WHILE (SELECT COUNT(*) FROM #temp WHERE Processed = 0) > 0
BEGIN
    SELECT TOP 1 @Table = TABLE_NAME
    FROM #temp
    WHERE Processed = 0

    PRINT @Table

    UPDATE #temp
    SET Processed = 1
    WHERE TABLE_NAME = @Table
END

...which simply prints the name of each table, where the table name matched our filter.

To execute a real SQL statement against each table, we need to add another variable at the top of our code:

DECLARE @sql nvarchar(4000)

and then the loop is written like this:

WHILE (SELECT COUNT(*) FROM #temp WHERE Processed = 0) > 0
BEGIN
    SELECT TOP 1 @Table = TABLE_NAME
    FROM #temp
    WHERE Processed = 0

    -- replace next line with your SQL statement
    SET @sql = N'SELECT ''' + @Table + N''' AS TableName, COUNT(*) AS [Rows] FROM ' + @Table

    EXEC sp_executesql @sql

    UPDATE #temp
    SET Processed = 1
    WHERE TABLE_NAME = @Table
END

...which prints the number of rows in the table, for each table where the name matched our filter.

You can view the complete code here:


Friday, March 13, 2009

Tracking Access Button Clicks

I am working with an older Access application that has almost 500 command buttons, and our task is to determine which features are still being used & which ones can be retired. Hopefully we'll recover screen space, plus improve the maintenance by removing obsolete queries & reports.

Of course, asking the users which features they are using is one option, but with over 100 users that is not really practical.

So we created some code to track button clicks, and using the Screen object in Access makes that quite simple. Basically we created a module to save the click info to a table, and then used Word to mass-update the code modules to add a line to invoke the tracking code. More on that later.

The ClickTracker code:

First, we test if the user clicked a command button. This is necessary because you can have a click event on a list box, combo box or even the form itself. The test looks like this:

If Screen.ActiveControl.ControlType = acCommandButton Then
' ( code goes in here )
End If

Then we can get the relevant info:

The form name is Screen.ActiveForm.Name

The button name is Screen.ActiveControl.Name

...and the button caption is Screen.ActiveControl.Caption

The complete module can be viewed here. ( see * and ** below )

Part 2 - adding that line of code in 500 places...

With that many command buttons, we needed a quick way to insert that code. Easy. Select the text of the entire code module & copy / paste into Word. Then you can do a Find & Replace All, using some of the special characters available in Word:

Find this: _Click()
Replace with: _Click()^p^tClickTracker

...which will insert a new line right at the beginning of every click event to invoke the ClickTracker code. Then you just select all the text from Word & paste it back into the code module & overwrite the old version.

* our table uses an Autonumber for the first field, that's why we skip rs.Fields(0). The other fields can be text(64) because that is the max length of a form or control name in Access.

** our version includes calls to custom functions 1: GetUser() calls the Windows API to get the ID of the Windows user, and 2: DateNoTime() strips off the time portion from Now() so we are saving only the date, not the time. You don't need these for the rest of it to work, but you do need to modify the code to skip those last two fields.

Access subform VBA

Forms behave differently when they are used as subforms, and this can be a challenge when using VBA. Normally, when you load a form, Access adds it to the Forms collection. So if you open a form MyForm then

Forms.Count = 1

and you can refer to a control on that form using:

Forms!MyForm!MyControl

However, if that form contains a subform, MySubform, the subform is not added to the Forms collection, so still

Forms.Count = 1

and this will fail:

Forms!MySubform!MyControl

So if you want to refer to a control on the subform, you need to use:

Forms!MyForm!TheSubformControl.Form!MyControl

Note that TheSubformControl is the name of the subform control on MyForm that contains the form MySubform, not the name of the embedded form. Nowhere do you see the actual name of the form that is being used as a subform.

Confusing? You bet. If you use the Expression Builder it can allow you build the incorrect version, so you need to make sure to drill down through the main form to the subform.

Let's say that you want to invoke code on the subform when an event occurs on the main form - for example, when you navigate to a different record on the main form, you want to set the value of a combo box on the subform & then run its AfterUpdate code. In the Form_Current event on the main form you might try this:

Forms!MyForm!TheSubformControl.Form!MySubformCombo = DCount("*","MyTable")
Forms!MyForm!TheSubformControl.Form.MySubformCombo_AfterUpdate

But wait - that doesn't work (yet). Why? The answer is quite simple: Scope.

By default, procedures in standard modules are Public, so they can be called from anywhere in the application. But event procedures in a form's code-behind are declared Private. So in the subform code, all you have to do is change this

Private Sub MySubformCombo_AfterUpdate()
End Sub

to this:

Public Sub MySubformCombo_AfterUpdate()
End Sub

...and it will work as expected.

Saturday, February 28, 2009

Compact and Repair Database

MS-Access files tend to grow over time, for a variety of reasons such as
  • adding many new records
  • large number of user edits
  • running action queries (insert, update, delete)
  • design changes, especially to forms & reports
  • temp tables that are emptied & repopulated
  • using the Name AutoCorrect options
  • Access replication
  • WorkGroup Security with encryption
The theoretical design limit of an Access mdb file is 2 GB, but as you approach 80-100 MB, performance & stability will begin to suffer; above 250 MB the file can become barely usable. This is especially significant when the file is located on the network & you have multiple users accessing that file. Plus, many anti-virus programs will aggressively scan mdb files.

NOTE: It is essential for all users to exit the database, and to make a backup copy of the file before performing the following...

One solution is to do regular maintenance on the database. In Access versions thru 2003, you can use

Tools ~ Database Utilities ~ Compact & Repair Database

which will in most cases shrink the file considerably. The typical reduction is 10-20%, but it can be much greater if the Compact has not been done for a long time. In fact, I've seen one case with a 100:1 reduction for a file that had never been compacted.

An alternate solution is to turn on the option for Auto-Compact, using

Tools ~ Options ~ General, and check the box Compact on Close

...however this option may crash the app if you are running on Terminal Server and/or using Access replication and/or the Access runtime version.

So, how often to perform this task? Good question. It depends on the list of reasons above. You could actually write some code to check the file size, compare it to some arbitrary limit, and then alert the user if the size exceeds that limit. Here is some sample code to check the file size:

Function GetMyFileSize() As Long
    ' this requires a reference to
    ' Microsoft Scripting Runtime
    ' c:\windows\system32\scrrun.dll
    Dim fso As FileSystemObject
    Dim oFolder As Folder, oFile As File
    Set fso = New FileSystemObject
    Set oFolder = fso.GetFolder(CurrentProject.Path)
    Set oFile = oFolder.Files(CurrentProject.Name)
    GetMyFileSize = oFile.Size
    Set oFile = Nothing
    Set oFolder = Nothing
    Set fso = Nothing
End Function
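Building on that, here is a sketch of the alert idea from the paragraph above. The 80 MB threshold, the message wording and the procedure name are arbitrary choices of mine - tune them for your environment:

```vb
' sketch only - the threshold is an arbitrary limit, per the discussion above
Sub CheckFileSize()
    Const MAX_SIZE As Long = 83886080   ' 80 MB
    If GetMyFileSize() > MAX_SIZE Then
        MsgBox "This database file has grown past 80 MB." & vbCrLf & _
               "Have all users exit, make a backup, " & _
               "then run Compact & Repair.", vbExclamation
    End If
End Sub
```

You could call this from the startup form's Open event so the warning appears before anyone starts working.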

Sunday, February 15, 2009

Access replicas and runtimes

We recently completed a lengthy project, migrating a replicated MS-Access 2003 application to a client/server model using Access as the front-end and SQL Server 2005 as the database. The application had previously been modified as necessary to utilize the Access 2003 runtime on terminal server.

There are a few features in the app that display query results in a datasheet view, and every one of those queries (even queries that are not updateable) is opened as read-only to prevent accidental changes, for example:

DoCmd.OpenQuery "MyQuery", acViewNormal, acReadOnly

Since the Access runtime leaves out all the built-in toolbars, a custom toolbar was added to allow the user to filter & sort the data for analysis. After the user completes their analysis & closes the query, any filter or sort is discarded by the runtime, so subsequent views return to the predefined values. At least, that is what we expected.

Final testing of the client/server version of the app revealed that filter/sort operations, and even changing column widths, would prompt the user on close to "save design changes?" How could that happen with the runtime & the read-only status?

As it turns out, the answer is quite simple. In the replicated system, changes could only be made in the design master, so any filter/sort etc. in a replica disappeared when the query was closed. However, the new front-end is not replicated, and so although the Access runtime prevents access to the full design environment, we are exposing the filter/sort capability to the user & therefore they can persist changes to their query.

Now, is that a defect or a feature? Each user has their own copy of the front-end file, so they are now able to customize their view of each query for their needs. The data itself continues to be protected by the read-only status. That sounds like a feature.

Thursday, February 05, 2009

Identity Crisis, part 2

Talk about being lucky.

We reviewed the tables whose PK identity values were running out of headroom against the limit of an int, and found that for the two tables that were almost out of room, the PKs were not FKs anywhere - they were simply there to satisfy the basic rule that every table should have a primary key.

These tables held only 20,000-25,000 records; it was only the fact that they were migrated from a replicated Access database that produced the very large autonumbers.

Therefore, for both tables we could simply drop the existing identity column & add a new identity column with a seed of 1 and an increment of 1. Problem solved.
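For the record, that rebuild can be scripted. This is a sketch with illustrative table, column & constraint names - confirm first, as we did, that nothing references the column as an FK, and back up the database:

```sql
-- names are illustrative; back up the database before running
ALTER TABLE dbo.MyTable DROP CONSTRAINT PK_MyTable
GO
ALTER TABLE dbo.MyTable DROP COLUMN ID
GO
ALTER TABLE dbo.MyTable ADD ID int IDENTITY(1,1) NOT NULL
GO
ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_MyTable PRIMARY KEY (ID)
```

The PK constraint has to be dropped before the column can be, and SQL Server assigns fresh identity values to the existing rows when the new column is added.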

Monday, February 02, 2009

Identity Crisis

I am working with a SQL Server 2005 database that was upsized from a replicated Access 2003 database. We used the SQL Server Migration Assistant for Access 2003.

Because the Access file was replicated, the Autonumber fields were set to Random, as opposed to Increment, so the resulting IDENTITY fields in SQL are scattered across the range of an int, which is roughly +/- 2.1 billion. However, the SSMA set them to Increment for the SQL database, and so new records in some of the tables are approaching the upper limit of an int. Even worse, Access does not recognize bigint, which would have been a quick fix; when I tried that, all the fields of the table displayed #deleted.

My goal was to find out how much "headroom" there was, in other words how many records could be added before reaching the int limit. I found this function helped to answer my question:

SELECT IDENT_CURRENT('MyTable') + IDENT_INCR('MyTable')

which returns the next identity value for the table. This is not for production use, since users can be hitting the db all the time, but it gave me the current highest value. Then I could calculate how many records could potentially be added to the table - which in some cases wasn't very many.
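That arithmetic can be done in the same query. A sketch, with an illustrative table name - the CONVERT to bigint avoids integer overflow right at the ceiling:

```sql
-- remaining identity values before hitting the int maximum (2147483647)
SELECT 2147483647 - CONVERT(bigint, IDENT_CURRENT('MyTable')) AS Headroom
```

Remember that IDENT_CURRENT reads a moving target on a live system, so treat the result as an estimate.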

Now I need a solution for my identity crisis.

Saturday, January 31, 2009

SSMA - the Memo bug - part 2

My last post discussed a method for changing the data type of a field in SQL Server. My suggestion was to use the SSMS table designer to create the script for you, but there is a faster, easier way, provided that you can write some basic T-SQL.

Keep in mind that the design requirement is to create a script to apply the change, because this same change needs to be applied to three different databases with the exact same result. If you were going to do this on a one-off basis then you could just go in there & make the change.

The new approach mimics what I would do in MS-Access:
  • Create a new field with the desired data type
  • Copy all the data from the existing field into the new field, while making any conversions or adjustments.
  • Delete the original field
  • Rename the new field to match the original name

Here is a sample of the T-SQL script. The existing field MyField in the table MyTable is being changed from nvarchar(max) to nvarchar(4000):

ALTER TABLE dbo.MyTable ADD Tmp_MyField nvarchar(4000) NULL
GO
UPDATE dbo.MyTable SET Tmp_MyField = CONVERT(nvarchar(4000), MyField)
GO
ALTER TABLE dbo.MyTable DROP COLUMN MyField
GO
EXECUTE sp_rename N'dbo.MyTable.Tmp_MyField', N'MyField', 'COLUMN'

The only shortcoming of this approach is that the "renamed" field will move to the last position in the table, so if that matters in your application then the SSMS script method is the better choice.

In my case, the app didn't care about the ordinal position of the field. Plus, I had 59 fields scattered in 25 different tables that needed conversion, so I wrote some VBA to loop thru the TableDefs in Access & create the change script for me. If you're interested you can view the VBA module here: SQL Memo Fixup.

PLEASE NOTE that if you have Memo fields in your Access database, you need to run this code in Access to create the script before you run the SSMA conversion, and then apply the script to the SQL database after the conversion.

Wednesday, January 28, 2009

SSMA - the Memo bug

I'm working with the Microsoft SQL Server Migration Assistant to convert an Access 2003 db to SQL Server 2005. Just today I found another little bug.

In Access you can have a Memo field that holds up to 64k characters, give or take. This is great for adding comments and notes.

The SSMA converts those as nvarchar(max) and correctly moves the data, however when linking to it using the SQL Native Client, Access thinks it is a text(255) field and displays only the last 255 characters of the field. Not good.

The fix is to change the data type to nvarchar(4000) and relink; Access will then retrieve the entire field (up to 4,000 chars, of course). What if you have Memo fields that exceed 4,000 characters? Good question. You might want to consider saving those on the file system & storing their locations in SQL.

The safest way to make the table change:

  • using SSMS, open the table in design view
  • make the change but don't save it
  • click the script icon to save the table change as a T-SQL file
  • close the table without saving the change
  • make a backup of your database - !
  • apply the script
  • relink the Access front-end
  • test

Saturday, January 17, 2009

Time Sheets

I am working on an MS-Access time & billing program for consultants. This is an old program that was originally designed in Access 2.0, and over the years it was converted to 97 and then 2000, and finally was split into separate front end / back end files.

One of the new features requested is to allow the consultant to view their hours in comparison to the standard 8-hour day and 5-day week. For example, if they work 6 hours on Monday, then 10 on Tuesday, then 7 on Wednesday, the program should show 23 hours worked vs. the standard 24 hours (3 days x 8 hours) and indicate that they are 1 hour short. This required a few new functions so I'd like to share those.

First of all, I needed to filter the dates to the current week. For that I created two functions, fairly straightforward, as follows:

Function Get_StartOfWeek() As Date
    Dim d As Date
    d = Date
    Do While DatePart("w", d, vbSunday) > 1
        d = DateAdd("d", -1, d)
    Loop
    Get_StartOfWeek = d
End Function

Function Get_EndOfWeek() As Date
    Dim d As Date
    d = Date
    Do While DatePart("w", d, vbSunday) < 7
        d = DateAdd("d", 1, d)
    Loop
    Get_EndOfWeek = d
End Function

...and then the WHERE clause in the query looks like this:

WHERE [WorkDate]>=Get_StartOfWeek() And [WorkDate]<=Get_EndOfWeek()

The next step was a bit of a challenge: the consultant might work on a Sunday or Saturday, and those hours need to be included in the weekly totals but must not affect the "standard week" hour calculation. My solution was to use the Choose() function, which let me set the daily expectation according to the day of the week. This is used inside a query, where the vbSunday constant is not available; that's why the "1" is hard-coded into the DateDiff() function:
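The expression itself did not survive in this copy of the post, so here is a plausible reconstruction. The 8-hour weekdays, the 0-hour Saturday and the StdHours alias are my assumptions; the Weekday() call with its hard-coded 1 (the value of vbSunday) finds the Sunday that starts the week, so DateDiff() returns 0-6:

```
StdHours: Choose(DateDiff("d", [WorkDate] - Weekday([WorkDate], 1) + 1, [WorkDate]), 8, 8, 8, 8, 8, 0)
```

A Monday [WorkDate] makes DateDiff return 1, so Choose picks the first 8; a Saturday returns 6, picking the 0.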


You may ask, why are there only six items in the Choose arguments, when there are seven days in a week? The reason is that the Choose arguments are 1-based, so if the [WorkDate] is a Sunday, then DateDiff returns a zero and so Choose returns a Null. Which is fine in this particular scenario, because Sunday is not a normal workday.

If you need to load values for all seven days of the week, then you have to subtract 1 from the [WorkDate] in order to get a Sunday date to return item 1 from the Choose. You can't substitute vbSaturday (7) for the start of week in DateDiff; that would give the wrong answer. Here is the seven-day version: