Categories
Access mde mismatch versions

Make sure the versions match

Recently I had an interesting experience with a client. I was updating the client’s Access application, which was in Access 2003 format. I made some additions to the application and, as usual, produced an MDE file.

My client then reported having problems with the MDE. On his machine, which had Access 2010 installed, the MDE opened and worked fine. On his staff member’s machine, which had Access 2003 installed, the database failed to open with the message (sic):

Unable to open the file. It appears to be in a newer version. …

Even though the file was in the same 2003 format, the MDE was incompatible because it was created with a newer version of Access. An MDE contains only compiled VBA code, and that compiled code is tied to the version of Access that compiled it. So the version of Access on the destination machines must match the version that was used to create the MDE file.
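One safeguard, before making an MDE, is to confirm which version of Access you’re actually running under. A minimal sketch using the built-in SysCmd function (the mappings in the comment are the standard ones, e.g. Access 2003 reports 11.0 and Access 2010 reports 14.0):


' Print the version of Access currently hosting the database, so you
' know which version the compiled MDE will be tied to.
' 11.0 = Access 2003, 12.0 = Access 2007, 14.0 = Access 2010
Public Sub CheckAccessVersion()
    Debug.Print "Running under Access version: " & SysCmd(acSysCmdAccessVer)
End Sub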

I’m sure this causes many a nightmare for developers whose clients use older versions of Access. I find the best way to handle it is to keep virtual machines: one for each combination of OS and Access version that your clients use.

Categories
Access csv dao delimited file import schema.ini

Loading in CSV files with variable columns

Certain specialised Access applications have features allowing users to specify, say, a custom hierarchy. Mainly these are used for reporting, so amounts can be rolled up. Typically they are synchronised with source data coming from an external system, which can mean the import/export specs in the Access front end need to change.

What if you wanted to avoid that? How would you import a CSV file with a new data layout into the database without changing an import spec or a line of code?

The trick, of course, is to use meta-data (data about data) to control and validate the input. Using DAO’s ability to open a recordset on a file, together with a schema file, this can be done fairly easily. Here’s one way I did it.
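For concreteness, assume the meta-data lives in a table like this (the names entities, ent_name and ent_type are the ones the code below queries; the exact shape is up to you):


entities
ent_name    ent_type
--------    --------
LName       char
FName       char
DOB         date
Children    integer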

DAO can open a delimited file without an import spec if there is a file named schema.ini in the same folder as the file being imported. We wouldn’t really want our users to maintain this file by hand, so we’ll create it from code.

The file looks like this:


[<filename>]
Format=CSVDelimited | TabDelimited | FixedLength
ColNameHeader=True|False
Col<n>=<ColName> <datatype> [Width <n>]

A sample file looks like this:


[test.csv]
Format=CSVDelimited
ColNameHeader=True
Col1=LName char width 20
Col2=FName char width 20
Col3=DOB date
Col4=Children integer
...

Detailed information on schema.ini file contents can be found on MSDN.

Sample code to create this file:


Dim oFS As New FileSystemObject   ' requires a reference to Microsoft Scripting Runtime
Dim oTextStream As TextStream
Dim nColNum As Long

Set oTextStream = oFS.OpenTextFile("c:\test\schema.ini", ForWriting, True)
oTextStream.WriteLine "[test.csv]"
oTextStream.WriteLine "Format=CSVDelimited"
oTextStream.WriteLine "ColNameHeader=True"

Dim oRS As DAO.Recordset
' Read our meta-data to get the names and data types of the columns
Set oRS = CurrentDb.OpenRecordset("select ent_name, ent_type from entities")

nColNum = 1
While Not oRS.EOF
    ' Text columns also need a width clause, so append one for char columns
    oTextStream.WriteLine "Col" & nColNum & "=" & oRS.Fields("ent_name") & _
        " " & oRS.Fields("ent_type") & _
        IIf(oRS.Fields("ent_type") = "char", " width 50", "")
    oRS.MoveNext
    nColNum = nColNum + 1
Wend
oRS.Close
oTextStream.Close

You obviously wouldn’t use hard-coded names and widths like this, but I was in a rush writing this. You’d probably also want to store the text widths in the meta-data.

The file can then be imported using the DoCmd.TransferText command.
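Here’s a minimal sketch of that call; the destination table name tblImport is a placeholder:


' Import the delimited file into tblImport. The schema.ini sitting in the
' same folder supplies the column names and types, so no import spec is needed.
DoCmd.TransferText acImportDelim, , "tblImport", "c:\test\test.csv", True

However, if you need to run through the file like a recordset, you can use DAO to open a recordset directly on the file. The SQL to do that looks like this: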


select <FIELDS> from [Text;FMT=Delimited;HDR=YES;DATABASE=<FILEPATH>;].[<FILE>#<EXT>];

<FIELDS> is typically set to *
<FILEPATH> is the folder the file resides in
<FILE> is the filename without extension, <EXT> being the extension.

To open such a recordset, all you need to do is:


Dim oRS As DAO.Recordset
Set oRS = CurrentDb.OpenRecordset("select * from [Text;FMT=Delimited;HDR=YES;DATABASE=c:\test;].[test#csv]")

And then you can process the data in the file. I used this technique to great success when processing variable data that would end up in a master-detail relationship. The first few columns were known and fixed; the remainder were described in meta-data. These remaining columns were then pivoted at load time and the data merged into the existing data in the database table.
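As a rough illustration of that pivot step: assuming the first two columns are the fixed keys and everything after them is variable, each extra column becomes its own row in a detail table. The table and field names here (amounts, acct_id, col_name, amount) are made up for this sketch.


Dim oSrc As DAO.Recordset
Dim oDst As DAO.Recordset
Dim nCol As Long

Set oSrc = CurrentDb.OpenRecordset( _
    "select * from [Text;FMT=Delimited;HDR=YES;DATABASE=c:\test;].[test#csv]")
Set oDst = CurrentDb.OpenRecordset("amounts", dbOpenDynaset) ' hypothetical detail table

While Not oSrc.EOF
    ' Fields 0 and 1 are the fixed key columns; each remaining column
    ' becomes its own detail row, keyed by the column's name
    For nCol = 2 To oSrc.Fields.Count - 1
        oDst.AddNew
        oDst!acct_id = oSrc.Fields(0).Value
        oDst!col_name = oSrc.Fields(nCol).Name
        oDst!amount = oSrc.Fields(nCol).Value
        oDst.Update
    Next nCol
    oSrc.MoveNext
Wend
oSrc.Close
oDst.Close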

I hope you can all make use of this feature.

Categories
consumption electricity green power virtualisation

Why server virtualisation is important

The other day our electricity bill arrived and, well, it was quite a shock: we were using a lot more power than last year. I began to wonder what had changed over the past year to increase our consumption so much.

In the past year I had started leaving the servers, all five of them, on 24/7. My hypothesis is that this is the difference. I don’t have an energy meter on each server, but some have four fans and five hard disks. In theory the disks should spin down over time, but whenever I’ve checked, that hasn’t happened. And in a small office the servers are utilised 25% of the time, at most.

So now I have begun a crusade to reduce our power consumption. I’ve already shut down the Asterisk server and have all phones going directly to the VoIP provider. I have shut down the Windows server, which hadn’t been used in two months. I moved services from the backup server to the NAS server. I configured the backup server to turn off at 8am every day; I just have to remember to turn it back on before I go.

Now I need to move services from my webmail server to the NAS machine. That will make one machine responsible for DNS, DHCP, LDAP, Samba, e-mail, IMAP, WebDAV and SquirrelMail.

When I need a server turned on, for Oracle or Umbraco or SQL Server testing, I’ll just turn it on. I don’t know what possessed me to build so many servers… there’s no real use for them, is there? This is why virtualisation is so important: under-utilised machines become properly utilised, and fewer servers means less power.

Glad I thought of it! (lol) Just being cheeky.