
Loading in CSV files with variable columns

In certain specialised applications in Access there will be features allowing users to specify, say, a custom hierarchy. Mainly these are used for reporting, so that amounts can be rolled up. Typically they are synchronized with the source data, which comes from an external system, and that can mean the import/export specs in the Access front end need to change.

What if you wanted to avoid that? How would you import a CSV file with the new data layout into the database without having to change an import spec or a line of code?

The trick is, of course, to use meta-data (data about data) to control and validate the input. Using DAO's ability to open a recordset on a file, together with schema data, this can be done fairly easily. Here's one way I did it.

DAO can open a delimited file without an import spec if there is a file named schema.ini in the same folder as the file being imported. We wouldn't really want our users to manipulate this file by hand, so we'll create it from code.

The file looks like this:


[<filename>]
Format=CSVDelimited | TabDelimited | FixedLength
ColNameHeader=True|False
Col<n>=<ColName> <datatype> [width <n>]

A sample file looks like this:


[test.csv]
Format=CSVDelimited
ColNameHeader=True
Col1=LName char width 20
Col2=FName char width 20
Col3=DOB date
Col4=Children integer
...

Detailed information on schema.ini file contents can be found on MSDN.

Sample code to create this file:


'Requires a reference to the Microsoft Scripting Runtime for FileSystemObject/TextStream
Dim oFS As New Scripting.FileSystemObject
Dim oTextStream As Scripting.TextStream
Dim nColNum As Long

'Create (or overwrite) schema.ini in the same folder as the file being imported
Set oTextStream = oFS.OpenTextFile("c:\test\schema.ini", ForWriting, True)
oTextStream.WriteLine "[test.csv]"
oTextStream.WriteLine "Format=CSVDelimited"
oTextStream.WriteLine "ColNameHeader=True"

Dim oRS As DAO.Recordset
'Read our meta-data to get the names of the columns and their data types
Set oRS = CurrentDb.OpenRecordset("select ent_name, ent_type from entities")

nColNum = 1
While Not oRS.EOF
    'Text columns get an explicit width; other types are written as-is
    oTextStream.WriteLine "Col" & nColNum & "=" & oRS.Fields("ent_name") & " " & _
        oRS.Fields("ent_type") & IIf(oRS.Fields("ent_type") = "char", " width 50", "")
    oRS.MoveNext
    nColNum = nColNum + 1
Wend
oRS.Close
oTextStream.Close

You obviously wouldn't use hard-coded names and such, but I was in a rush writing this. You would also probably want to store more information, such as text widths, in the meta-data; something along the lines of the table sketched below.
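
As an aside, a meta-data table like the hypothetical entities table queried above could be created from VBA with a little DDL; the ent_width column is purely an assumption for storing those text widths.


'Hypothetical meta-data table: one row per variable column in the incoming file.
'ent_width would let you store text widths rather than hard-coding them.
CurrentDb.Execute _
    "CREATE TABLE entities (" & _
    "ent_name TEXT(50), " & _
    "ent_type TEXT(20), " & _
    "ent_width INTEGER)", dbFailOnError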

This can then be imported using the DoCmd.TransferText command, as in the sketch below.
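
A minimal example of such an import, assuming a hypothetical staging table named tblImport; because schema.ini sits in the same folder as the file, no saved import spec is needed.


'Import the CSV into a (hypothetical) staging table. The Text driver reads
'schema.ini from the file's folder, so no saved import spec is required.
DoCmd.TransferText acImportDelim, , "tblImport", "c:\test\test.csv", True

However, if you need to run through the file like a recordset, you can use DAO to open a recordset directly on the file. The SQL to open the file with DAO looks like this: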


select <FIELDS> from [Text;FMT=Delimited;HDR=YES;DATABASE=<FILEPATH>;].[<FILE>#<EXT>];

<FIELDS> is typically set to *
<FILEPATH> is the folder the file resides in
<FILE> is the filename without extension, <EXT> being the extension.

To open such a recordset, all you then do is:


Dim oRS As DAO.Recordset
Set oRS = CurrentDb.OpenRecordset("select * from [Text;FMT=Delimited;HDR=YES;DATABASE=c:\test;].[test#csv]")

And then you can process the data in the file. I used this technique to great success when processing variable data that would end up in a master-detail relationship: the first few columns were known and fixed, and the remainder were described in meta-data. Those remaining columns were pivoted at load time and the data was then merged into the existing data in the database table, roughly as sketched below.
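
Below is a minimal sketch of that pivot step, appending detail rows only (the merge/update against existing data is omitted). It assumes the first column of the file identifies the master record and that the detail table and its columns (tblDetail, MasterKey, EntityName, EntityValue) already exist; all of those names are hypothetical.


Dim db As DAO.Database
Dim oSrc As DAO.Recordset   'the CSV opened via the Text driver
Dim oMeta As DAO.Recordset  'meta-data describing the variable columns
Dim oDet As DAO.Recordset   'destination detail table
Dim sKey As String

Set db = CurrentDb
Set oSrc = db.OpenRecordset("select * from [Text;FMT=Delimited;HDR=YES;DATABASE=c:\test;].[test#csv]")
Set oMeta = db.OpenRecordset("select ent_name from entities")  'assumes at least one row
Set oDet = db.OpenRecordset("tblDetail", dbOpenDynaset)

While Not oSrc.EOF
    'The first, fixed column identifies the master record
    sKey = CStr(oSrc.Fields(0).Value)

    'Pivot the remaining, meta-data-driven columns into one detail row each
    oMeta.MoveFirst
    While Not oMeta.EOF
        oDet.AddNew
        oDet!MasterKey = sKey
        oDet!EntityName = oMeta!ent_name
        oDet!EntityValue = oSrc.Fields(oMeta!ent_name.Value).Value
        oDet.Update
        oMeta.MoveNext
    Wend

    oSrc.MoveNext
Wend

oDet.Close
oMeta.Close
oSrc.Close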

I hope you can all make use of this feature.
