Category Archives: Access

Strange error on closing a .mdb

If you are getting a strange error about the collating sequence when closing a .mdb file in Access 2010, something like “Selected collating sequence not supported with the specified file format”, here is the likely cause.

When you look this up you will find advice to set the collating sequence to General. However, this will not work, and in some cases the database will become corrupt.

The most likely cause is a recent service pack for Access. SP1, the latest service pack for Access 2010, appears to have a bug in the Compact on Close feature. The cure is to turn this feature off.
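If you want to turn the feature off from code rather than through the options dialog, a minimal VBA sketch looks like this. The "Auto Compact" option string is my recollection of the documented SetOption name, so verify it against your version of Access before relying on it:

' Turn off Compact on Close for the current database
' ("Auto Compact" is the SetOption name for this setting)
Application.SetOption "Auto Compact", False

The same setting can be changed interactively via File, Options, Current Database.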

A begging letter

The web is a massive thing. You’ll find all sorts on here, good and bad, in relative measure of course. There are those who would consider a blog on Access bad.

I’ve heard about inbound marketing. What it is exactly I am not sure. It certainly suits an introverted person like myself; I can’t even speak with my parents for all that long!

But doesn’t it all sound like inbound marketing is just a new way of writing a begging letter for work? It is a way of attracting people who are looking for someone like you. I thought that if I blogged on Access topics, people would find me. So far it hasn’t worked out that way. Maybe they are coming here (though from the stats they aren’t) and taking the knowledge.

Any inbound marketing tips and sites are welcome. I’ve used up my book budget this year, so I won’t be able to buy any of those.

Evaluate Lightswitch

Yesterday (well, actually two days ago, on the 27th of July) Microsoft released Visual Studio Lightswitch.

Lightswitch is a very lightweight tool for building what are essentially forms applications in Silverlight, which can run anywhere: on your desktop, or in the cloud, accessing remote databases in SQL Azure or SQL Server.

Microsoft has graciously included it for those with an MSDN subscription. So I installed it and gave it a quick whirl. I was impressed with just how quickly you can set up a very simple application. But will it replace Access? I think not; it is aimed at an entirely different market. It probably is better than attempting to use Access Services on SharePoint.

I’ll be posting more on this in the next few weeks. Why not give it a try yourself with the 90-day trial.

Make sure the versions match

Recently I had an interesting experience with a client. I was updating a client’s Access application, which was in Access 2003 format. I made some additions to the application and, as usual, made an .mde file.

My client then reported having problems with the .mde. On his machine, which had Access 2010 installed, the .mde opened and worked fine. On his staff’s machine, which had Access 2003 installed, the database failed to open, giving the message (sic):

Unable to open the file. It appears to be in a newer version. …

Even though the file format is of the same version, the .mde was incompatible because it was created with a newer version of Access. So the version of Access on the destination machine must match the version that was used to create the .mde file.

I’m sure this will cause many a nightmare for developers whose clients are using older versions of Access. I find the best way to handle this is to use virtual machines: one for each combination of OS and Access version that your clients use.

Loading CSV files with variable columns

In certain specialised Access applications there will be features allowing users to specify, say, a custom hierarchy. Mainly these are used for reporting, so that amounts can be rolled up. Typically they are synchronised with source data that comes from an external system. This can mean that the import/export specifications in the Access front end need to change.

What if you wanted to avoid that? How would you import a CSV file with a new data layout into the database without having to change an import spec or a line of code?

The trick is, of course, to use meta-data (data about data) to control and validate the input. Using DAO’s ability to open a recordset on a file, along with the schema data, this can be done fairly easily. Here’s one way I did it.

DAO has the ability to open a delimited file without an import spec if there is a file named schema.ini in the same folder as the file being imported. We wouldn’t really want our users to edit this file by hand, so we’ll create it from code.

The file looks like this:

Format=CSVDelimited | TabDelimited | Fixedwidth
Col<n>=<ColName> <datatype>

A sample file looks like this:

Col1=LName  char width 20
Col2=FName char width 20
Col3=DOB date
Col4=Children integer

Detailed information on the contents of the schema.ini file can be found on MSDN.

Here is some sample code to create this file:

Dim oFS As New FileSystemObject
Dim oTextStream As TextStream
Dim nColNum As Long

Set oTextStream = oFS.OpenTextFile("c:\test\schema.ini", ForWriting, True)
oTextStream.WriteLine "[test.csv]"
oTextStream.WriteLine "Format=CSVDelimited"
oTextStream.WriteLine "ColNameHeader=True"

Dim oRS As DAO.Recordset
' We need to read our meta-data to get the names and data types of the columns
Set oRS = CurrentDb.OpenRecordset("select ent_name, ent_type from entities")

nColNum = 1
While Not oRS.EOF
    oTextStream.WriteLine "Col" & nColNum & "=" & oRS.Fields("ent_name") & " " & _
        oRS.Fields("ent_type") & IIf(oRS.Fields("ent_type") = "char", " width 50", "")
    nColNum = nColNum + 1
    oRS.MoveNext
Wend
oRS.Close
oTextStream.Close

You obviously wouldn’t use hard-coded names and widths like these, but I was in a rush writing this. You’d probably also want to store more information, such as text widths, in the meta-data.

The file can then be imported using the DoCmd.TransferText command. However, if you need to run through the file like a recordset, you can use DAO to open a recordset directly on the file. The SQL to open the file with DAO looks like this:

select <FIELDS> from [Text;FMT=Delimited;HDR=YES;DATABASE=<FILEPATH>;].[<FILE>#<EXT>];

<FIELDS> is typically set to *
<FILEPATH> is the folder the file resides in
<FILE> is the filename without extension, <EXT> being the extension.

To open such a recordset, all you then do is:

Dim oRS As DAO.Recordset
Set oRS = CurrentDb.OpenRecordset("select * from [Text;FMT=Delimited;HDR=YES;DATABASE=c:\test;].[test#csv]")

Then you can process the data in the file. I used this technique to great success when processing variable data that ended up in a master-detail relationship. The first few columns were known and fixed, with the remainder described in meta-data. These remaining columns were then pivoted at load time and the data merged into the existing database table.
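As a rough sketch of that pivoting step, the loop below walks the file recordset and turns each variable column into one detail row. The table name tblDetail and the field names KeyValue, EntityName and Amount are hypothetical placeholders, as is the assumption that the first three columns are the fixed ones; substitute your own schema:

Dim oFileRS As DAO.Recordset
Dim oDetailRS As DAO.Recordset
Dim nCol As Long
Const FIXED_COLS As Long = 3   ' assume the first three columns are the known, fixed ones

Set oFileRS = CurrentDb.OpenRecordset( _
    "select * from [Text;FMT=Delimited;HDR=YES;DATABASE=c:\test;].[test#csv]")
Set oDetailRS = CurrentDb.OpenRecordset("tblDetail", dbOpenDynaset)

While Not oFileRS.EOF
    ' each variable column becomes one row in the detail table
    For nCol = FIXED_COLS To oFileRS.Fields.Count - 1
        oDetailRS.AddNew
        oDetailRS!KeyValue = oFileRS.Fields(0)            ' fixed key column from the file
        oDetailRS!EntityName = oFileRS.Fields(nCol).Name  ' column name from the CSV header
        oDetailRS!Amount = oFileRS.Fields(nCol).Value
        oDetailRS.Update
    Next nCol
    oFileRS.MoveNext
Wend
oFileRS.Close
oDetailRS.Close

In the real application the merge into existing data would be done with an update/append query rather than row-by-row AddNew, but the loop shows the shape of the pivot.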

I hope you can all make use of this feature.