Esri does a great job documenting and explaining the purpose of the log files, the different configuration options, and when you may consider using one configuration over another. As a result, I won't go into too much detail on each of the available options.
ArcSDE log files are used to store temporary selection sets for your ArcMap session. The number of rows in these tables can be very volatile. Esri goes into more detail here.
The different configuration options are:
- SHARED - sessions using the same login share the same set of persistent tables.
- SESSION BASED - log file tables are created for each unique session that connects and are usually deleted when the session terminates.
- STAND ALONE - like session-based log files, but each selection set gets its own table instead of being lumped into one table.
- POOL - a predetermined pool of session-based or stand-alone log file tables shared by all incoming sessions.
Esri publishes the details on these different configuration options and when to use each here.
For each of the above options, you have to weigh the costs in four categories:
- required user permissions to implement
- level of monitoring and management required to implement
- additional costs to the database to implement
- performance
Stand-alone log files don't work for read-only users, so for most of you there is no point in considering this configuration option. Additionally, they require monitoring and management to ensure that the MAXSTANDALONELOGS ArcSDE server configuration parameter is set adequately.
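If you do run stand-alone log files anyway, the setting is easy to check from SQL. A sketch, assuming the ArcSDE-for-Oracle convention of storing server parameters in the SDE schema's SERVER_CONFIG table:

```sql
-- Sketch: check the current MAXSTANDALONELOGS setting. Assumes server
-- parameters live in the SDE schema's SERVER_CONFIG table; a NULL value
-- means the default is in effect.
SELECT prop_name, num_prop_value, char_prop_value
  FROM sde.server_config
 WHERE prop_name = 'MAXSTANDALONELOGS';
```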
A pool of log files, like stand-alone log files, requires monitoring and management. I have enough things to monitor and manage.
Session-based log files are constantly creating and dropping tables, and DDL can be expensive. Furthermore, the session-based log file tables don't always get dropped, leaving you to clean them up.
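If you have been running session-based log files, it is worth checking for leftovers. A sketch, assuming the SDE_SESSION<sde_id> naming convention for session-based log file tables:

```sql
-- Sketch: find leftover session-based log file tables (assumes the
-- SDE_SESSION<sde_id> naming convention). Requires SELECT on DBA_TABLES.
SELECT owner, table_name
  FROM dba_tables
 WHERE table_name LIKE 'SDE_SESSION%'
 ORDER BY owner, table_name;
```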
Shared log files require more permissions up front to create the tables (which doesn't happen until at least 100 records are selected in the map), but the tables are persistent. There are no thresholds to worry about like there are with a log file pool or the stand-alone configuration.
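The up-front piece amounts to a couple of grants. A minimal sketch with a hypothetical GIS_EDITOR user and tablespace; Esri's documentation lists the exact privileges required:

```sql
-- Sketch: up-front permissions so the shared log file tables can be
-- created in the user's schema on first use. GIS_EDITOR and the USERS
-- tablespace are hypothetical; substitute your own.
GRANT CREATE TABLE TO gis_editor;
ALTER USER gis_editor QUOTA 100M ON users;
```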
The cons to using shared log files are:
- You may experience contention on the tables if users share the same login since they will be sharing the same segments for storing their temporary selection sets.
- The tables don't always get cleared out (a DELETE FROM, not a TRUNCATE, since they are shared) once the session(s) terminate. I've seen log file tables with 10 million orphaned rows.
- If statistics are gathered on these tables, the potential for terrible execution plans and performance during operations like reconcile is very high. The row counts in these tables are volatile, and the selection set rows stored in the tables are joined to version query filters during operations such as reconcile. If the optimizer thinks there are 0 rows based on statistics when there are actually 4,000 rows of selection sets in the tables, you may see a very expensive nested loops operation where there should be a hash join. A quick way to check for this skew is shown right after this list.
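To spot the skew from that last bullet, compare what the optimizer believes about the log file tables with their actual row counts. A sketch, assuming the shared configuration's SDE_LOGFILES/SDE_LOGFILE_DATA table names:

```sql
-- Sketch: compare optimizer statistics with reality on the shared log
-- file tables (run as the login that owns them).
SELECT table_name, num_rows, last_analyzed
  FROM user_tables
 WHERE table_name IN ('SDE_LOGFILES', 'SDE_LOGFILE_DATA');

-- The actual row count right now:
SELECT COUNT(*) AS actual_rows FROM sde_logfile_data;
```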
The remedy to all three of the above cons is to convert each user's log file tables from permanent tables to global temporary tables, AND to create the global temporary versions of these tables for new users when the user is created in Oracle.
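The pattern is sketched below on a deliberately simplified table. The column list is illustrative only, NOT the real SDE_LOGFILE_DATA definition; take the real DDL from the Esri article linked below, or pull it from the existing tables with DBMS_METADATA.GET_DDL before dropping anything. ON COMMIT PRESERVE ROWS is the important part: rows live for the duration of the session, which matches how the log files are used.

```sql
-- Sketch of the conversion pattern. Capture the real table definition
-- first, e.g.:
--   SELECT DBMS_METADATA.GET_DDL('TABLE', 'SDE_LOGFILE_DATA', USER) FROM dual;

DROP TABLE sde_logfile_data;

CREATE GLOBAL TEMPORARY TABLE sde_logfile_data (
  logfile_data_id NUMBER(38) NOT NULL,  -- illustrative columns only
  sde_row_id      NUMBER(38) NOT NULL
) ON COMMIT PRESERVE ROWS;  -- keep rows for the life of the session
```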
With global temporary tables:
- No more contention. Each user has their own "copy" of the table when sharing logins.
- When the session terminates, that session's rows in the global temporary table are gone, period. No need to worry about orphaned row cleanup.
- Statistics on a global temporary table aren't useful (the data is private to each session), so Oracle will dynamically sample the table to determine the best execution plan, assuming you have not reduced the default OPTIMIZER_DYNAMIC_SAMPLING parameter (2 in Oracle 10g and later) — you can verify this with the check after this list.
- As an added bonus, global temporary tables generate less redo than persistent tables.
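You can see the dynamic sampling behavior for yourself. A minimal sketch, using the simplified table from the conversion example above:

```sql
-- Confirm dynamic sampling hasn't been dialed down (default is 2 on 10g+).
SELECT value FROM v$parameter WHERE name = 'optimizer_dynamic_sampling';

-- With no usable statistics on the global temporary table, the plan
-- output should include a "dynamic sampling used for this statement" note.
EXPLAIN PLAN FOR SELECT * FROM sde_logfile_data WHERE logfile_data_id = 1;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```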
Esri documents this conversion to global temporary tables here.
If you are still reading this, you may be wondering whether there is added cost introduced by the dynamic sampling of the log file tables, and if so, whether it is significant. To be honest, I've not yet done that research and testing. I can say that I've never seen the dynamic sampling of log files consume a lot of resources in the hundreds of level 12 traces or AWR reports I've reviewed. Perhaps I'll get a chance to test and post my findings in the future.
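If you want to measure it yourself in the meantime, the hooks are standard Oracle. A sketch: wrap the workload in an extended SQL trace, where the recursive dynamic sampling queries are easy to pick out because they carry an OPT_DYN_SAMP comment.

```sql
-- Sketch: capture an extended (level 12) SQL trace around the workload;
-- the dynamic sampling queries show up as recursive statements tagged
-- with /* OPT_DYN_SAMP */ against the log file tables.
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
-- ... run the workload here (e.g., a reconcile) ...
ALTER SESSION SET EVENTS '10046 trace name context off';
```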