FAQ: Why is versioning a large feature dataset in Oracle slow?


Question
Why is versioning a feature dataset containing large amounts of data (e.g. a geometric network or parcel fabric) very slow in an Oracle geodatabase?

Answer
When you version a feature dataset in an Oracle geodatabase, statistics are automatically gathered on any tables and indexes that have no statistics before the main versioning operation begins. By default, this statistics gathering can be a very time-consuming process, so it is often preferable to create statistics on the feature dataset manually before attempting to version it.
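To see which objects would trigger this automatic gathering, you can query the Oracle data dictionary before versioning: LAST_ANALYZED is NULL for tables and indexes that have never had statistics collected. A minimal check, run as the schema owner:

```sql
-- Tables in the current schema with no statistics; these are the
-- ones Oracle will analyze automatically during versioning.
SELECT table_name
  FROM user_tables
 WHERE last_analyzed IS NULL;

-- The same check for indexes.
SELECT index_name, table_name
  FROM user_indexes
 WHERE last_analyzed IS NULL;
```

If both queries return no rows, the automatic gathering step should have little work to do.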

 

The following statistics-gathering methods have been observed to be very slow on large feature datasets:

1. Analyze Datasets in ArcToolbox.  (Very slow.)

2. EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS('USER', estimate_percent=>100, cascade=>TRUE);  (Very slow.)

3. EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS('USER', estimate_percent=>10, cascade=>TRUE);  (Very slow on some tables.)

4. DBMS_STATS.GATHER_TABLE_STATS.  (Very slow on some tables.)

 

The following statistics gathering method was found to have the best performance:

ANALYZE TABLE <Table Name> ESTIMATE STATISTICS SAMPLE 10 PERCENT;
ANALYZE INDEX <Index Name> VALIDATE STRUCTURE;

This operation takes only a few seconds per table/feature class.

Versioning the feature dataset after this procedure takes only a few minutes, compared to possibly many hours if this step is not performed.
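Running the two ANALYZE statements by hand for every table and index in a large feature dataset is tedious. The anonymous PL/SQL block below is one way to script it — a sketch that assumes you are connected as the schema owner and want to analyze every table and index in that schema:

```sql
-- Sketch: apply the fast ANALYZE commands to every table and index
-- in the current schema before versioning the feature dataset.
BEGIN
  FOR t IN (SELECT table_name FROM user_tables) LOOP
    EXECUTE IMMEDIATE
      'ANALYZE TABLE ' || t.table_name ||
      ' ESTIMATE STATISTICS SAMPLE 10 PERCENT';
  END LOOP;

  FOR i IN (SELECT index_name FROM user_indexes) LOOP
    EXECUTE IMMEDIATE
      'ANALYZE INDEX ' || i.index_name || ' VALIDATE STRUCTURE';
  END LOOP;
END;
/
```

To restrict the loop to a single feature dataset, add a WHERE clause filtering table_name to the tables that belong to it.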

 

Greg B.
