Sunday, August 17, 2014

T-SQL Select Schema

Let's say that you have the same table that exists in two different schemas -

A.my_table
B.my_table

...and you want to write a query with a parameter so you can select from one schema or the other, without resorting to dynamic SQL.

One possible solution is to write a UNION query that will deliver the data from one schema or the other, like this:

DECLARE @schema sysname
SET @schema = 'A'

SELECT *
FROM A.my_table
WHERE @schema = 'A'
UNION ALL
SELECT *
FROM B.my_table
WHERE @schema = 'B'

Here we are using the ALL keyword to make sure that we retrieve all records (and to skip the duplicate-removal step); a plain UNION would return only distinct records.

Of course, the DDL for both tables must match, or the UNION will fail; the number of columns must be the same, and the data types must be compatible.

This idea was actually applied to an SSRS report, where the dataset must be either T-SQL text or a stored procedure.
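For the SSRS case, the UNION ALL pattern can be wrapped in a stored procedure that takes the schema as a parameter. A sketch, using the same hypothetical A.my_table / B.my_table names from above (the procedure name is invented):

```sql
-- Sketch: parameterized wrapper for the UNION ALL schema-select pattern
CREATE PROC dbo.get_my_table
(
	@schema sysname
)
AS
SELECT *
FROM A.my_table
WHERE @schema = 'A'
UNION ALL
SELECT *
FROM B.my_table
WHERE @schema = 'B'
GO
```

The report then calls the procedure with the schema name as its only parameter.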

Saturday, October 20, 2012

Programming Tab Controls

MS-Access tab controls offer a way to present multiple groups of related data, making it easy for users to locate specifics within an intuitive & user-friendly layout - very much like an Excel workbook. The main control is of course called a Tab Control, and the tabs within it are called Pages.

For example, you might have a main form where you select a customer, and then you have a tab control that has pages that display related data for such things as Contacts, Sales, Invoices, Payments, Credit Data and so on. Usually each tab has a subform that contains the data related to the current selection from the main screen.

However, when you access the tab using VBA, there are a few variations that are not exactly clear in the online documentation.

For starters, when you place a control on a tab, the Parent is actually the main form, not the page of the tab control. So if you have a main form called "main_form", a tab control named "MyTab", and several pages named Page1, Page2, and Page3 - and then you place a command button Button1 on Page1 - the full reference to that control is

Forms!main_form!Button1

...note that the page has no effect on that. Therefore, when you have a form with one or more tab controls, you must be careful to use a unique name for each of the controls, whether on the main form, or on any of the tabs.

If you wish to detect which tab has been clicked, use the OnChange event. The value of the tab control is the PageIndex of the active page, and page indexes are zero-based - so clicking the first tab Page1 yields a MyTab value of 0 - and to get the name of the page you will need to use

Private Sub MyTab_Change()
    With Me
        MsgBox .MyTab.Pages(.MyTab).Name
    End With
End Sub

to get the name of the page that was clicked.


Sunday, September 09, 2012

SQL - list columns in a table, updated, again

The code has been updated again; this time, instead of using sp_columns, we are using the system tables directly. This provides access to additional data, such as whether a column is computed. Plus, if Microsoft ever changes sp_columns, that will not break our code.
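As a sketch of the idea, the column metadata - including the is_computed flag - can be read straight from the catalog views (the table name here is just a placeholder):

```sql
-- Sketch: column metadata from the system catalog, including computed columns
SELECT
	c.name,
	t.name AS type_name,
	c.max_length,
	c.is_computed,
	c.is_identity
FROM sys.columns c
JOIN sys.types t
	ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID('dbo.my_table')	-- placeholder table name
ORDER BY c.column_id
```

Everything sp_columns reports, plus the extra flags, comes out of sys.columns and sys.types directly.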

One significant change since the previous January 2012 version is that option 4 now provides a skeleton for an insert stored procedure - all of the column names are provided with data types in a DECLARE statement as parameters, followed by the INSERT for column names, and finally the SELECT statement that references the parameters. In this option, the code is excluding non-updating columns such as identity, timestamp, image, or any computed columns.

In all prior versions, we bracket [] any columns that have non-alphanumeric characters such as spaces or hyphens. In order to accommodate the #4 option, we have a large REPLACE statement to convert those to underscores. If you have any other characters to exclude, you can edit the REPLACE (which occurs in two places) to make the corrections.

EDIT ON 10/20/2012 - this new rewrite of wm_columns will not work on views, because sp_tables actually pulls in both tables and views, whereas sys.tables only looks at tables. Hmmm. I will probably add one more option to this, so views will be back in the list.

Thursday, January 12, 2012

SQL - list columns in a table, updated

Way back in April 2009, I published the original version of my stored procedure wm_columns, which can be used instead of sp_help or sp_columns to view a list of column names for a table or view.

The new version has a number of enhancements, and you can view the complete code here.

The first parameter is required, which is the name of a table or view, such as:
wm_columns 'my_table'
...and the default schema is dbo

You may also include the schema, e.g. in AdventureWorks you can use this:
wm_columns 'Person.Contact'

The second parameter is optional, and has the following choices:

0 = display the column list in physical order
1 = display the column list in alphabetical order
2 = display a select statement
3 = display an insert statement
4 = display an insert statement, with data types added as comments

For the options 2, 3, and 4, column names are bracketed with [] if they contain spaces or other non-alphanumeric characters.

For the options 3 and 4, any identity or timestamp columns are displayed but commented-out.

The third parameter, when used with option 0 or 1, allows you to apply a fuzzy filter to the column name, for example:

wm_columns 'my_table',0,'id'
...will list only the columns that contain 'id' in the name.

The third parameter, when used with option 2, allows you to use an alias for the table, for example in AdventureWorks:

wm_columns 'Person.CountryRegion',2,'cr'
...will display a select statement like this:
SELECT
cr.CountryRegionCode,
cr.Name,
cr.ModifiedDate
FROM
Person.CountryRegion cr

Thursday, June 30, 2011

SQL Timestamp and Access

The timestamp field (also known as rowversion) in a SQL Server table holds a unique binary value that is updated every time a record is updated. In a recent post in a SqlServerCentral forum, there was a suggestion that when you are using MS-Access as the front-end, you should add a timestamp column to every linked table - and thus prevent write-conflict messages.

If you use the SQL Server Migration Assistant (SSMA) to convert your Access database, it will automatically add a column SSMA_timestamp to every table that has either a floating-point field (Single or Double) or has a Boolean field that allows nulls. This is supposed to help SQL Server manage updates to records. For more details you can refer to the Microsoft article Move Access data to a SQL Server database by using the Upsizing Wizard.

Now, if you want to follow the original suggestion, but some tables already have a timestamp, you could write a script to generate the update scripts, like this:

SELECT
	'ALTER TABLE [' + OBJECT_NAME(t.object_id) + ']
ADD my_timestamp TIMESTAMP'
FROM sys.tables t
WHERE NOT EXISTS
	(
	SELECT 1
	FROM sys.columns c
	WHERE c.system_type_id =
		(
		SELECT system_type_id
		FROM sys.types
		WHERE name = 'timestamp'
		)
	AND c.object_id = t.object_id
	)
AND t.type = 'U'
ORDER BY 1

...and when you run this, you get one ALTER TABLE statement for each table that does not have a timestamp column. NOTE that this does not change the tables, it only creates the scripts to make the table changes. Then, you can review the scripts & remove any tables (e.g. staging or import tables) where adding a column might break the application. This also documents the changes being made. When you actually execute the script, it will add the timestamp columns as needed.

(BTW in the statement "ADD my_timestamp TIMESTAMP" I've left out the "NOT NULL" clause; a timestamp column defaults to NOT NULL, and its value is always generated by the engine anyway.)

On the other hand, if you take a look at SQL Timestamp field for Access use there is a discussion of how the timestamp can actually cause write-conflicts if you have a form / subform combination that uses columns from the same table, which is not an uncommon practice.

So, before you make radical changes to your database, remember to backup everything, and run some test cases including measuring response times - this gives you a baseline after which you can apply the mods & evaluate the results.

Sunday, May 15, 2011

T-SQL Computed Columns, part 3

My last two posts described how to create a computed column, and how to control the data type and size.

In the first example, the computed column would be "A" if there is an approval date, otherwise it would be "".

The general rule is that the computed column can only refer to other columns within the same table. However, it is possible to use a scalar function to perform a lookup from another table. This might have a dramatic effect on performance, depending on table size, row count etc. but it is an option according to the BOL.

So then, what if that approval date actually resides in a different table?

Assuming that my_table has a primary key PK int, and other_table has the related column ID int, we could write a lookup function, like this:

CREATE FUNCTION dbo.fnLookup
(
	@PK int
)
RETURNS varchar(1)
AS
BEGIN
	DECLARE @Lookup varchar(1)

	SELECT @Lookup =
		CASE
			WHEN approval_date IS NOT NULL
			THEN 'A'
			ELSE ''
		END
	FROM other_table
	WHERE ID = @PK

	RETURN @Lookup
END

...and then use that function in our computed column, like this:

ALTER TABLE my_table
ADD A_display
AS
dbo.fnLookup(PK)

Again, this could have an impact on the performance - any query run against my_table will execute the function for every row - but this is one more option for computed columns.

A better solution might be to create a column in my_table to hold the A_display data, and then create a trigger on other_table that would keep A_display in sync. We'll explore that in a future post.
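As a preview, that trigger might look roughly like this - a sketch only, assuming the PK/ID relationship described above and a real A_display column added to my_table:

```sql
-- Sketch: keep a real my_table.A_display column in sync from other_table
-- (assumes my_table.PK joins to other_table.ID, as above)
CREATE TRIGGER dbo.trg_other_table_sync
ON other_table
AFTER INSERT, UPDATE
AS
BEGIN
	SET NOCOUNT ON

	UPDATE m
	SET A_display =
		CASE
			WHEN i.approval_date IS NOT NULL
			THEN 'A'
			ELSE ''
		END
	FROM my_table m
	JOIN inserted i
		ON i.ID = m.PK
END
```

This trades a function call on every read for a small amount of work on every write, which is usually the better bargain when the table is read far more often than it is updated.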

Saturday, May 07, 2011

T-SQL Computed Columns, part 2

My previous post presented the basics of using computed columns in SQL Server. Here we have a slightly more complex example that also deals with data types and sizes.

I have a table of locations in the city - each location has a direction, street, and cross street, such as

NB Austin Blvd @ Cermak Rd
SB Austin Blvd @ Cermak Rd
EB Cermak Rd @ Austin Blvd
WB Cermak Rd @ Austin Blvd

If you examine the list, you see that all four locations are actually the same street intersection - Austin and Cermak - and in the application (MS-Access) we need the ability to sort by the intersection, displayed in a format like street @ cross_street.

The original app used an expression in the query to place the streets in alphabetical order, like this:

IIf([Street] > [Cross_Street],
[Cross_Street] & " @ " & [Street],
[Street] & " @ " & [Cross_Street])

...and the result for all four locations would be:
Austin Blvd @ Cermak Rd
which provides the right answer & works fine, but takes a lot of typing, so it was eventually converted to a VBA function.

Now that the data has been moved into SQL Server, we can avoid using IIf() or VBA functions, which use resources in the GUI, and instead take advantage of the power of T-SQL.

We can start by creating a computed column like this:

ALTER TABLE my_table
ADD Intersection_Name
AS
CASE
	WHEN Street > Cross_Street
	THEN Cross_Street + ' @ ' + Street
	ELSE Street + ' @ ' + Cross_Street
END

...which gives the correct result; however, Street and Cross_Street are both nvarchar(255), so our computed column comes out as nvarchar(513). Why? It is the sum of the sizes of the two columns plus the length of the ' @ ' literal (255 + 255 + 3 = 513) - and MS-Access regards anything over 255 characters as a Memo field, so we cannot sort by it.

(Street and Cross_Street were originally defined as Text(255) in MS-Access, but the data is never that long - 100 would have been a better choice; but that's a topic for another day.)
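You can confirm the size the engine chose by querying the catalog; note that sys.columns.max_length is in bytes, so an nvarchar(513) column reports 1026:

```sql
-- Check the declared type and size of the computed column
SELECT
	c.name,
	t.name AS type_name,
	c.max_length	-- in bytes: nvarchar(513) shows 1026
FROM sys.columns c
JOIN sys.types t
	ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID('dbo.my_table')
	AND c.name = 'Intersection_Name'
```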

We could force the computed column to nvarchar(255) to accommodate the limitations of MS-Access, like this:

ALTER TABLE my_table
ADD Intersection_Name
AS
CAST(
	CASE
		WHEN Street > Cross_Street
		THEN Cross_Street + ' @ ' + Street
		ELSE Street + ' @ ' + Cross_Street
	END
AS nvarchar(255)
)

...but now we have a potential bug - we don't expect this will ever happen - but if the concatenation is ever more than 255 characters, the CAST will silently truncate it. The solution is to shorten the street names to 126 characters each, so that 126 + 3 + 126 = 255 and the computed column can never overflow:

ALTER TABLE my_table
ADD Intersection_Name
AS
CAST(
	CASE
		WHEN Street > Cross_Street
		THEN LEFT(Cross_Street, 126) + ' @ ' + LEFT(Street, 126)
		ELSE LEFT(Street, 126) + ' @ ' + LEFT(Cross_Street, 126)
	END
AS nvarchar(255)
)

One final note - it is not possible to ALTER a computed column; in order to change it you must DROP the column & then ADD the new column definition. As a result, the column moves to the last position in the table.
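So a change to the definition looks like this - drop, then re-add:

```sql
-- A computed column cannot be ALTERed; drop it and add the new definition
ALTER TABLE my_table
DROP COLUMN Intersection_Name

ALTER TABLE my_table
ADD Intersection_Name
AS
CAST(
	CASE
		WHEN Street > Cross_Street
		THEN LEFT(Cross_Street, 126) + ' @ ' + LEFT(Street, 126)
		ELSE LEFT(Street, 126) + ' @ ' + LEFT(Cross_Street, 126)
	END
AS nvarchar(255)
)
```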

Saturday, April 23, 2011

T-SQL Computed Columns

A Computed Column in SQL Server allows you to create a read-only calculated field that performs some logic but occupies no space in the database (unless you mark it as PERSISTED).

We have an Access front-end with a SQL Server database & I was looking for ways to improve the user experience. I noted that the query behind one of the forms opened "OK" but scrolling thru the records was a little sluggish. The form itself is a continuous form. I suspected that an IIf() statement was the culprit.

The IIf() statement drives a control that displays "A" if the record has an approval date, like this:

=IIf(IsNull([approval_date]),"","A")

I created a computed column in the SQL table using this statement:

ALTER TABLE my_table
ADD A_display
AS
CASE WHEN approval_date IS NOT NULL
THEN 'A'
ELSE ''
END

...and then in the application, I replaced that IIf() statement with the new column A_display; thus duplicating the Access statement. This did speed up the query quite a bit.

Now, looking back at the modified table, I can see that A_display shows as

varchar(1) NOT NULL

...which is fine for my purpose. But you can modify that behavior by adding to the statement. For example, we might want the output to be nullable - so if you set the control's background property to Transparent (instead of Normal) then you can see through it, if there's no approval date.

To make the column nullable, we could just change the original logic like this:

CASE WHEN approval_date IS NOT NULL
THEN 'A'
ELSE NULL
END

...but if your statement is more complex, you might use NULLIF:

NULLIF(
CASE WHEN approval_date IS NOT NULL
THEN 'A'
ELSE ''
END
,'')


You can also control the data type and size by using CAST:

CAST(
CASE WHEN approval_date IS NOT NULL
THEN 'A'
ELSE
NULL
END AS nvarchar(1)
)

which gives you: nvarchar(1) NULL

Friday, January 28, 2011

sp_executesql

The system stored procedure sp_executesql can be used to run dynamic sql. It also provides a way to use input and/or output parameters. However, the BOL examples seemed to be mighty complicated so I put this together for a quick reference.

For the example, we have a simple table "Cities" like this:

CREATE TABLE Cities
(
	City nvarchar(50),
	ST nvarchar(2),
	County nvarchar(50)
)

If all you want to do is to look up the County for Phoenix AZ you could just write

SELECT County
FROM Cities
WHERE City = 'Phoenix'
AND ST = 'AZ'

To make this more versatile, you could write a stored procedure like this

CREATE PROC Get_County
(
@City nvarchar(50),
@ST nvarchar(2)
)
AS
SELECT County
FROM Cities
WHERE City = @City
AND ST = @ST
GO

However, pulling data from a stored procedure might require that you create a temp table and then use the INSERT...EXEC syntax, which does work, but nesting of sp's is limited.

If we turn this into dynamic sql, we can feed in different parameters. It does seem a bit complicated but it is flexible once you get it set up...

DECLARE
@County nvarchar(50),
@County_out nvarchar(50),
@City nvarchar(50),
@ST nvarchar(2),
@sql nvarchar(4000)

SET @sql = 'SELECT @County_out = County FROM Cities WHERE City = @City AND ST = @ST'

EXEC sp_executesql
@sql,
N'@City nvarchar(50), @ST nvarchar(2), @County_out nvarchar(50) OUTPUT',
@City = 'Phoenix',
@ST = 'AZ',
@County_out = @County OUTPUT

SELECT @County AS County

So the steps are:
+ declare all of your input and output variables
+ declare a variable that's used locally for the lookup value
+ 1st argument = the sql string
+ 2nd argument = comma-separated list declaring all of the in/out parameters
+ next argument(s) = input parameter(s)
+ last argument = output parameter

Saturday, December 18, 2010

SSRS and CSV format

Using SQL Server 2005 Reporting Services, I created a timed subscription to save a report as a CSV file to a Windows file share. However, the user was unable to import the file into their salesforce.com application.

Turns out that the default export format for CSV uses Unicode encoding, so if you open the file with Excel, each row of the data ends up in the first column.

Luckily SSRS has a file rsreportserver.config where you can add or modify the export formats. The config file is an xml file which in my case was located at

C:\Program Files\Microsoft SQL Server\MSSQL.3\Reporting Services\ReportServer

Locate the <Render> section and you will find entries for all the formats; find the line that begins with <Extension Name="CSV" and add the Configuration, DeviceInfo, and Encoding elements:

<Render>
  <Extension Name="CSV" Type=" ...">
    <Configuration>
      <DeviceInfo>
        <Encoding>ASCII</Encoding>
      </DeviceInfo>
    </Configuration>
  </Extension>
</Render>

Note: in the default config file the Extension element is self-closing (it ends with "/>"); that closing slash has to be removed so the element can contain the Configuration block.

After making this change, just to be safe I stopped and re-started IIS. After that it worked just fine, delivering an ASCII-encoded file that could be opened with Excel 2003 and imported into Salesforce.