Why is versioning a feature dataset containing large amounts of data (e.g. a geometric network or parcel fabric) very slow in an Oracle geodatabase?
When versioning a feature dataset in an Oracle geodatabase, statistics are automatically gathered, before the main versioning operation begins, on any tables and indexes that have no statistics. By default, this statistics gathering can be very time-consuming. It may be preferable to create statistics on the feature dataset manually before attempting to version it.
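Because only tables and indexes with no statistics trigger the automatic gathering step, it can help to first check which objects are missing statistics. A minimal check from SQL*Plus (a sketch, assuming you are connected as the owner of the feature dataset's tables):

```sql
-- Tables in the connected schema that have never been analyzed
SELECT table_name
  FROM user_tables
 WHERE last_analyzed IS NULL;

-- Indexes in the connected schema that have never been analyzed
SELECT index_name
  FROM user_indexes
 WHERE last_analyzed IS NULL;
```

Any objects returned by these queries are candidates for the manual statistics step described below.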
The following statistics gathering methods have been observed to be very slow on large feature datasets:
1. Analyze datasets in ArcToolbox. (Very slow.)
2. EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS('USER', estimate_percent=>100, cascade=>TRUE). (Very slow.)
3. EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS('USER', estimate_percent=>10, cascade=>TRUE). (Very slow on some tables.)
4. DBMS_STATS.GATHER_TABLE_STATS. (Very slow on some tables.)
The following statistics gathering method was found to have the best performance:
ANALYZE TABLE <Table Name> ESTIMATE STATISTICS SAMPLE 10 PERCENT;
ANALYZE INDEX <Index Name> VALIDATE STRUCTURE;
Performing this operation takes a few seconds per table/feature class.
Versioning the feature dataset after this procedure takes only a few minutes, compared to potentially many hours if the step is skipped.
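Running the two ANALYZE statements by hand for every table and index in a large feature dataset is tedious, so they can be scripted with a PL/SQL loop over the data dictionary. A sketch, assuming the feature dataset's tables live in the connected user's schema (adjust the cursor queries to filter to specific tables if needed):

```sql
-- Analyze every table and index owned by the connected user using the
-- fast methods described above. Run from SQL*Plus as the data owner.
BEGIN
  FOR t IN (SELECT table_name FROM user_tables) LOOP
    EXECUTE IMMEDIATE
      'ANALYZE TABLE ' || t.table_name ||
      ' ESTIMATE STATISTICS SAMPLE 10 PERCENT';
  END LOOP;

  FOR i IN (SELECT index_name FROM user_indexes) LOOP
    EXECUTE IMMEDIATE
      'ANALYZE INDEX ' || i.index_name || ' VALIDATE STRUCTURE';
  END LOOP;
END;
/
```

At a few seconds per object, even a schema with hundreds of feature classes completes in minutes.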