What specifically does OracleBulkCopy do, and how can I optimize its performance?

After a few more days of reading and experimentation, I was able to (mostly) answer these questions:

  1. I found this buried in the ODP.NET documentation (ironically not in the OracleBulkCopy docs):

    The ODP.NET Bulk Copy feature uses a direct path load approach, which is similar to, but not the same as Oracle SQL*Loader. Using direct path load is faster than conventional loading (using conventional SQL INSERT statements).

    So it appears that it does use a direct path load (a minimal usage sketch follows this list).

  2. This I was able to verify by running a large bulk copy operation and checking the index properties in SQL Developer: the index did appear as UNUSABLE while the bulk copy was in progress. However, I've also discovered that OracleBulkCopy.WriteToServer will refuse to run if the index starts out in an UNUSABLE state, so clearly there's more going on here; if it were as simple as disabling and rebuilding the index, it shouldn't care about the initial state. (The second sketch after this list shows one way to check the index status programmatically.)

  3. It does make a difference specifically if the index is also a constraint. Found this little gem in the documentation linked above:

    Enabled Constraints
    During an Oracle bulk copy, the following constraints are automatically enabled by default:

    • NOT NULL
    • UNIQUE
    • PRIMARY KEY (unique-constraints on not-null columns)

    NOT NULL constraints are checked at column array build time. Any row that violates the NOT NULL constraint is rejected.

    UNIQUE constraints are verified when indexes are rebuilt at the end of the load. The index is left in an Index Unusable state if it violates a UNIQUE constraint.

    The documentation is a little hazy on what happens during the load, especially with primary keys, but one thing is absolutely certain: it behaves differently with a primary key than without one. Since OracleBulkCopy will happily let you violate index constraints (and punt the index into an UNUSABLE state when it's done), my hunch is that it builds the PK index during the bulk copy but simply doesn't validate it until afterward.

  4. I'm not sure whether the difference observed is within Oracle itself or just a quirk of OracleBulkCopy. The jury's still out on this one.

  5. OracleBulkCopy will throw an exception if an index is initially in the UNUSABLE state, so it's really a moot point.

  6. Even if there are other factors, indexes (and especially PK indexes) are still the most important, as I found out by:

  7. Creating a global temporary table with the same schema (using CREATE AS), then bulk copying into the temporary table, and finally doing a plain old INSERT from the temp table into the real table (this pattern is sketched in code after the conclusion below). Since the temp table has no index, the bulk copy happens very fast, and the final INSERT is also fast because the data is already in a table (I haven't tried the APPEND hint yet, since a 5M-row table-to-table copy already takes less than a minute).

    I'm not yet sure of the potential ramifications of (ab)using the temporary tablespace this way, but so far it hasn't given me any trouble, and it's much safer than bulk copying straight into the real table, since it prevents corruption of either the rows or the indexes.

    The success of this also pretty clearly demonstrates that the PK index is the problem, as that is the only practical difference between the temp table and the permanent table - both started with zero rows during the performance tests.
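
For concreteness, here's roughly what the bulk copy calls discussed above look like. This is a minimal sketch rather than production code: it assumes the unmanaged ODP.NET provider (Oracle.DataAccess.Client), a pre-populated source DataTable, and a placeholder table name TARGET_TABLE; the BatchSize and timeout values are arbitrary.

    using System.Data;
    using Oracle.DataAccess.Client;

    static class BulkLoadSketch
    {
        public static void Load(DataTable source, string connectionString)
        {
            using (var connection = new OracleConnection(connectionString))
            {
                connection.Open();
                using (var bulkCopy = new OracleBulkCopy(connection))
                {
                    bulkCopy.DestinationTableName = "TARGET_TABLE"; // placeholder
                    bulkCopy.BatchSize = 50000;     // rows per batch (arbitrary)
                    bulkCopy.BulkCopyTimeout = 600; // seconds (arbitrary)
                    bulkCopy.WriteToServer(source); // performs the direct path load
                }
            }
        }
    }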
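
And a sketch of the verification from item 2, under the same assumptions (unmanaged provider, placeholder table name). A table can have several indexes, so a real version would filter user_indexes by index name instead of taking the first row.

    using Oracle.DataAccess.Client;

    static class IndexStatusSketch
    {
        // Returns 'VALID' or 'UNUSABLE' for the (first) index on the table.
        public static string GetIndexStatus(OracleConnection connection, string tableName)
        {
            using (var cmd = connection.CreateCommand())
            {
                cmd.CommandText =
                    "SELECT status FROM user_indexes WHERE table_name = :tbl";
                cmd.Parameters.Add(":tbl", tableName.ToUpperInvariant());
                // Naive: only reads the first row returned.
                return (string)cmd.ExecuteScalar();
            }
        }
    }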

Conclusion: Don't bother trying to bulk copy more than around 100k rows into an indexed Oracle table using ODP.NET. Either drop the index (if you don't actually need it) or "preload" the data into a different (non-indexed) table, as sketched below.
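
To make the "preload" suggestion concrete, here's a sketch of the temporary-table approach from item 7, under the same assumptions as the earlier sketches (unmanaged provider, placeholder names TARGET_TABLE and TARGET_STAGING). The temporary table is created once ahead of time; ON COMMIT PRESERVE ROWS matters because the bulk copy commits as it loads.

    using System.Data;
    using Oracle.DataAccess.Client;

    static class StagedLoadSketch
    {
        // One-time setup, not per load:
        //   CREATE GLOBAL TEMPORARY TABLE target_staging
        //     ON COMMIT PRESERVE ROWS
        //     AS SELECT * FROM target_table WHERE 1 = 0;
        public static void Load(DataTable source, OracleConnection connection)
        {
            // 1. Direct path load into the index-free temp table: fast.
            using (var bulkCopy = new OracleBulkCopy(connection))
            {
                bulkCopy.DestinationTableName = "TARGET_STAGING";
                bulkCopy.WriteToServer(source);
            }

            // 2. Conventional table-to-table insert: the PK index is maintained
            //    normally, so a constraint violation fails here instead of
            //    leaving the index UNUSABLE.
            using (var cmd = connection.CreateCommand())
            {
                cmd.CommandText =
                    "INSERT INTO target_table SELECT * FROM target_staging";
                cmd.ExecuteNonQuery();
            }
        }
    }

If the table-to-table INSERT ever became the bottleneck, the /*+ APPEND */ hint would be the next thing to try.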
