
Using roles and security with Oracle

Joining Access and SQL Server together, security is really simple: your users' AD accounts are passed through to the SQL Server instance, which applies the appropriate permissions, and the job is done. But how do you achieve the same group (or role) based security when using Oracle?

Oracle doesn’t fit in with AD very well. There may be strategies that would allow Oracle to recognise AD accounts, or a synchronisation mechanism, but this isn’t an OID blog so I won’t get into that. I’ll assume that you have Oracle as a separate entity on your network.

Oracle has roles, which can be granted privileges on schema objects. So if you have a schema used by hundreds of users, you assign each user to a role in their user profile, and when a user signs in they only see what that role has been given permission to see. Users can belong to multiple roles, but in general it will be only one.

As this is not an Oracle blog, I won’t discuss how to set the roles up and assign them to the objects; I’ll leave that as a lesson for the reader. But let’s assume we have two roles, aptly named ROLE1 and ROLE2. No idea which has more privileges, but it hardly matters.
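
For completeness, the setup amounts to something like this (just a sketch; the object and account names are made up for illustration):

create role ROLE1;
create role ROLE2;

-- grant each role whatever it should be able to see
grant select on app_schema.sales_figures to ROLE1;
grant select, insert, update on app_schema.sales_figures to ROLE2;

-- grant both roles to the shared Oracle account the ODBC connection
-- uses, but leave them non-default so nothing is enabled until SET ROLE
grant ROLE1 to app_user;
grant ROLE2 to app_user;
alter user app_user default role none;

The key point is that both roles are granted to the shared account but neither is enabled by default, so the SET ROLE we issue later decides what each session can see.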

The Access ODBC connection will be logging in to the database as a shared Oracle user, so how do you tie the Windows user names to the proper roles at login time? Oracle exposes the USERENV context, and its OS_USER attribute identifies the user name on the Windows machine which has connected (SESSION_USER, by contrast, gives the Oracle account name, which here is the same for everyone).
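
You can check what Oracle sees for the current connection with a quick query:

select sys_context('USERENV', 'OS_USER')      as os_user,
       sys_context('USERENV', 'SESSION_USER') as db_user
from dual;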

Then you’ll need a table to map those user names to the role they are to be assigned, along with a little bit of code.



+--------+---------+
| USERID | ROLE_ID |
+--------+---------+
| sam    |       1 |
| todd   |       2 |
| harvey |      -1 |
| gill   |       1 |
+--------+---------+

The -1 would indicate a user that previously had permissions but those have been revoked.
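
A minimal definition of that table might look like this (assuming the names used by the function below):

create table Users
( userid  varchar2(20) primary key,
  role_id number(1)    not null
);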

To do the mapping you need this bit of PL/SQL code:



create or replace function authRole return number
   authid current_user  -- invoker's rights: SET ROLE is not allowed
                        -- from a definer's rights routine
as
   s_user    varchar2(20);
   i_role_id number(1);
begin
   -- OS_USER gives the Windows user name of the connecting client
   s_user := upper(sys_context('USERENV', 'OS_USER'));

   -- role_id > 0 excludes users whose access has been revoked (-1)
   select role_id into i_role_id
     from Users
    where upper(userid) = s_user
      and role_id > 0;

   case i_role_id
      when 1 then execute immediate 'set role ROLE1';
      when 2 then execute immediate 'set role ROLE2';
   end case;

   return i_role_id;
exception
   when NO_DATA_FOUND then
      raise_application_error(-20011, 'User unauthorized');
   when others then
      raise_application_error(-20012, 'Unknown exception in authRole function');
end;

You could put this into a logon trigger on the schema, but you’ll want to remove the raise_application_error calls, as they don’t come through when ODBC is connecting (in my experience, anyway).

That’s the Oracle side, whew, now for the Access part.

I create a function which assigns the role by calling the code above, then checks that access was granted.



Function OraAssignRole()
On Error GoTo ERR_ORAASSIGNROLE
    Dim rs As DAO.Recordset
    Dim oQry As DAO.QueryDef

    'Initiate the login by running a pass-through query against Oracle.
    'SetTheRole is assumed to be a server-side wrapper that calls the
    'authRole function above and returns the role id as a one-row result.
    Set oQry = CurrentDb.QueryDefs("qryPassthrough")
    oQry.SQL = "{call SetTheRole}"
    oQry.ReturnsRecords = True
    Set rs = oQry.OpenRecordset(dbOpenSnapshot)

    'Made it here, so the user is OK; keep the role id for use in the app
    glbl_role_id = rs(0)

    rs.Close
    oQry.Close
    Exit Function

ERR_ORAASSIGNROLE:
    MsgBox "Error on login. You may not have rights. The application will close now."
    Application.Quit
End Function

And that’s the end of it. Place a call to this in the AutoExec macro and your application will log in to Oracle, set the role, and, if the user is not found, close down the application.

I hope you found this post useful.


The mighty Oracle pipeline clause

Several months ago I was working on a problem for one of my clients. It involved taking a blob of data and performing a transformation on it (what else would you do with a database, eh?). The catch is that the transform is performed according to user-defined rules which are dynamic: users have the ability to add, remove, and update these rules.

A rule consisted of several columns of sparse data. The first columns described the data pattern which triggered a match, and the remainder gave the split of the new data. Basically it took financial data and split it out by low-level organisation, so a company could see how each element of cost/income was spread over the business.

The rules matched according to a hierarchical ordering: rules with more populated pattern columns were matched first, and rules with fewer populated columns later. This ordering was controlled by a table used to sort the rules. More detail below.

So how do you go about solving this problem? The first instinct is to simply join the data table against the rules, sort, and then use a massive CASE to remove the duplicates and select the proper rule to insert into the result table. Yes, that would work, but it would consume vast amounts of temporary space as Oracle built the join, sorted it, and then selected the proper rule for each row of data. In most instances several candidate rows (if not dozens) would be created for each real data row, and at most one would pass through.

Still, it seemed the fastest way to do it. The plan didn’t look too bad, so I tested it on some real data. It was slow, very slow. And occasionally it would cause an error (OMG, Oracle has bugs!). So I had to find another way to get the job done, and fast.

I came upon an idea: load the rules all into memory, basically into an array with the rules sorted in the proper order. I could then take a couple of columns of data, which were mandatory, and use them to reduce the number of rules to compare down to a handful. From there I could trim down to the single matching rule easily enough and return its rule id for further processing. As the rules were all in memory this should be really, really fast. Now to hook it up!

So I can load up the rules pretty quickly using the BULK COLLECT facility of Oracle PL/SQL and keep them in memory for fast lookup. But how do I hook that process up to the data in the table and have it run as fast as possible? The secret here is two things: pipelining and parallel processing.
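
The loading side could look something like this; a sketch only, with hypothetical table names (rules and rule_sort) standing in for the real schema:

create or replace package rule_cache as
   type rule_tab is table of rules%rowtype index by pls_integer;
   g_rules rule_tab;   -- package state lives for the session
   procedure load_rules;
end rule_cache;
/
create or replace package body rule_cache as
   procedure load_rules is
   begin
      -- one round trip pulls every rule into memory, pre-sorted by the
      -- hierarchy table so the most specific rules are tried first
      select r.*
        bulk collect into g_rules
        from rules r
        join rule_sort s on s.rule_id = r.uid_
       order by s.sort_key;
   end load_rules;
end rule_cache;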

Pipelining is where a PL/SQL function returns results one at a time, as each row is produced, so Oracle can process them as part of a larger SQL statement as it goes. Otherwise the database would have to wait for the entire set of results before continuing. It might sound like that isn’t needed in this situation, but it is: the output of the function feeds straight into an insert, and pipelining lets the insert consume rows as they are matched.
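
If you haven’t come across pipelined functions before, here’s a toy example, nothing to do with the rules engine, just to show the shape:

create or replace type num_tab as table of number;
/
create or replace function gen_squares(p_n in number)
   return num_tab pipelined
as
begin
   for i in 1 .. p_n loop
      pipe row (i * i);   -- each row is handed to the caller immediately
   end loop;
   return;
end;
/
-- consumed like a table:
select column_value from table(gen_squares(5));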

And since the results of each invocation of the function are independent of any other, having the function run in parallel can boost performance significantly; in my experience it makes a real difference. Obviously you’ll need Oracle to have multiple cores or processors available for this to work in practice.

So, now I’ve got all the pieces and the PL/SQL looked something like this:

The SQL statement:

insert into output ( ent_id, value_, rule_id )
select /*+ parallel(data, 2) */
       aqsr.ent_id,
       data.value_ * aqsr.ent_percent,
       asm.rule_id
  from reslt_m data
  inner join table(GroupdData(cursor(
         select /*+ parallel(GD, 2) */ * from Groupd_Data GD))) asm
    on  data.cost_centre_id  = asm.cost_centre_id
    and data.expense_type_id = asm.expense_type_id
    and data.measure         = asm.measure_id
    and data.year_           = asm.year_
    and data.data_type_id    = asm.data_type_id
  inner join rules_ratios aqsr
    on asm.rule_id = aqsr.uid_

and the pipelined function GroupdData:

function GroupdData(p_inpcur in ref_basedatacur_t) return G_GrpdData_tab
   pipelined parallel_enable (partition p_inpcur by any)
as
   /* definitions left out */
begin
   loop
      -- fetch the input in chunks to keep memory use bounded
      fetch p_inpcur bulk collect into l_groupify_rec limit 10000;
      exit when l_groupify_rec.count = 0;

      for l_idx in l_groupify_rec.first .. l_groupify_rec.last
      loop
         /* tedious code to check and find the matching rule left out;
            once found, fill in the rule_id and hand the row straight
            back to the consuming query */
         l_rec.rule_id := rules(l_idx)(l_rul_idx).UID_;
         pipe row (l_rec);
      end loop;
   end loop;
   close p_inpcur;
   return;
end;

And that’s all there is to it! Okay, so I’ve left out some of the nitty-gritty implementation details, but you get the idea.

You might expect this code to run slowly, but don’t be deceived: it is very, very fast. I intend to use similar techniques in the future, though only when a single SQL statement is either too slow or problematic.


It’s altogether LDAP

This past week I had two significant events. Well, three really. First, I won an Innovation Voucher, which will hopefully allow some research into a new product offering. I say “hopefully” because so far all the NWDA has done is ensure that my application meets some basic requirements. Now it is up to some suppliers to show interest, and we’ll see what happens from there.

Then I upgraded one of my Ubuntu Linux machines to the latest release. This took quite some time! It’s quite a slow machine, a Celeron 667MHz, but it plays a critical role in the network as it runs DHCP, DNS, backup, NAS, Samba and WINS. Overall the upgrade went through cleanly, but Ubuntu really needs to ask its questions upfront, especially those relating to config files. I would walk away from the machine, the display would turn off, and when I came back I was apprehensive about hitting a key lest I answer a critical prompt incorrectly. Maybe there is a way to keep the display on all the time.

Another issue I ran into was the GDM login. I had turned it off, so users logging in at the console would get a text login and then issue ‘startx’ if they wanted X Windows. The upgrade ignored that setting and re-enabled the GDM login. I removed GDM from the startup, but that still didn’t help, so in the end I removed all the X stuff entirely. And then I had to tell GRUB2 to boot to a text screen.
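
For the record, the text-console change amounts to a couple of lines in /etc/default/grub; a sketch, as the exact options vary by release:

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="text"   # boot straight to a console, no splash
GRUB_TERMINAL=console

# then apply the change:
#   sudo update-grub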

Once all that is done and you’ve fully upgraded from GRUB to GRUB2, the boot time is minimal. It really is fast!

The next event was changing my local domain. I had a local domain which ended in .local. Unfortunately .local is reserved for multicast DNS, which quite often messed up my VPN users, and in certain versions of Linux some utilities, like ping, will not work properly with a .local domain. So I decided to change it, to .localdom. My, what a process. I had to change LDAP and all its config files and such; surprisingly this was the easy part. Then came all the ldap.conf files on all the Linux machines, the Samba setup on the PDC, the DNS settings (of course), my Exim settings (more on this below), and the BackupPC setting for the e-mail domain.

Most things worked, except for the PDC. I determined through the logs that this was because the bind password had not been reset; once that was done things worked. That exposed a configuration issue with the way NetBIOS browsing was being done: apparently it is an absolute must that the PDC be the master browser. So back into smb.conf to correct it. But nothing I did fixed the problem. Eventually I gave up and rebooted the machine. Things worked! Sigh. All that hassle and it just needed a reboot.
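
For anyone chasing the same problem, the master browser settings live in the [global] section of smb.conf; something along these lines (a sketch, not my exact config):

[global]
   domain master    = yes
   local master     = yes
   preferred master = yes
   os level         = 65   # high enough to win browser elections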

A couple of days later, however, I got complaints about e-mails bouncing. Upon enquiry with the recipients, it seemed the old local domain name was being left in outgoing e-mails. The problem? The LDAP query inside the Exim config file responsible for that had not been updated with the new base DN.

This week I’ve been looking into SQL Azure, Microsoft’s cloud version of SQL Server. It’s quite basic to start with, but with Microsoft the best is usually yet to come. I’m sure this will be a very popular service, and one that I may use in the future; I’ll be doing some of my data-mining testing on it. Oracle does offer Oracle in the cloud, but it’s through Amazon EC2.