public interface TransactionController extends PersistentSet
Each transaction controller is associated with a transaction context, which provides error cleanup (via cleanupOnError) when standard exceptions are thrown anywhere in the system.
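A minimal usage sketch (not from the original javadoc), assuming tc is a TransactionController obtained from the access manager and conglomId identifies an existing conglomerate:

    // Open, work, close, commit; close() is on the ConglomerateController interface.
    ConglomerateController cc = tc.openConglomerate(
        conglomId,
        false,                                     // do not hold open across commit
        TransactionController.OPENMODE_FORUPDATE,  // open for update
        TransactionController.MODE_RECORD,
        TransactionController.ISOLATION_SERIALIZABLE);
    try {
        // ... insert, fetch, and replace rows through cc ...
    } finally {
        cc.close();
    }
    tc.commit();    // or tc.abort() to back out the changes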
Modifier and Type | Field and Description |
---|---|
static byte | IS_DEFAULT |
static byte | IS_KEPT |
static byte | IS_TEMPORARY |
static int | ISOLATION_NOLOCK - No locks are requested for data that is read only. |
static int | ISOLATION_READ_COMMITTED - No lost updates, no dirty reads, only committed data is returned. |
static int | ISOLATION_READ_COMMITTED_NOHOLDLOCK - No lost updates, no dirty reads, only committed data is returned. |
static int | ISOLATION_READ_UNCOMMITTED - No locks are requested for data that is read only. |
static int | ISOLATION_REPEATABLE_READ - Read and write locks are held until end of transaction, but no phantom protection is performed (i.e. no previous key locking). |
static int | ISOLATION_SERIALIZABLE - Gray's isolation degree 3, "Serializable, Repeatable Read". |
static int | KEEP_LOCKS |
static int | MODE_RECORD - Constant used for the lock_level argument to openConglomerate() and openScan() calls. |
static int | MODE_TABLE - Constant used for the lock_level argument to openConglomerate() and openScan() calls. |
static int | OPEN_CONGLOMERATE - Constant used for the countOpens() call. |
static int | OPEN_CREATED_SORTS |
static int | OPEN_SCAN |
static int | OPEN_SORT |
static int | OPEN_TOTAL |
static int | OPENMODE_BASEROW_INSERT_LOCKED - Use this mode in the openConglomerate() call used to open the secondary indices of a table for inserting new rows into the table. |
static int | OPENMODE_FOR_LOCK_ONLY - Use this mode in the openConglomerate() call used to just get the table lock on the conglomerate without actually doing anything else. |
static int | OPENMODE_FORUPDATE - Open the table for update; if not specified, the table will be opened for read. |
static int | OPENMODE_LOCK_NOWAIT - The table lock request will not wait. |
static int | OPENMODE_LOCK_ROW_NOWAIT - The row lock request will not wait. |
static int | OPENMODE_SECONDARY_LOCKED - Use this mode in the openConglomerate() call which opens the base table to be used in an index-to-base-row probe. |
static int | OPENMODE_USE_UPDATE_LOCKS - Use this mode in the openScan() call to indicate the scan should get update locks during the scan, and either promote the update locks to exclusive locks if the row is changed, or demote the lock if the row is not updated. |
static int | READONLY_TRANSACTION_INITIALIZATION |
static int | RELEASE_LOCKS |
Modifier and Type | Method and Description |
---|---|
void | abort() - Abort all changes made by this transaction since the last commit, abort, or the point the transaction was started, whichever is most recent. |
void | addColumnToConglomerate(long conglomId, int column_id, Storable template_column, int collation_id) - Add a column to a conglomerate. |
boolean | anyoneBlocked() - Return true if any transaction is blocked (even if not by this one). |
void | commit() - Commit this transaction. |
DatabaseInstant | commitNoSync(int commitflag) - "Commit" this transaction without syncing the log. |
void | compressConglomerate(long conglomId) - Return free space from the conglomerate back to the OS. |
boolean | conglomerateExists(long conglomId) - Check whether a conglomerate exists. |
int | countOpens(int which_to_count) - Report on the number of open conglomerates in the transaction. |
long | createAndLoadConglomerate(java.lang.String implementation, DataValueDescriptor[] template, ColumnOrdering[] columnOrder, int[] collationIds, java.util.Properties properties, int temporaryFlag, RowLocationRetRowSource rowSource, long[] rowCount) - Create a conglomerate and load it with rows that come from the row source, without logging. |
BackingStoreHashtable | createBackingStoreHashtableFromScan(long conglomId, int open_mode, int lock_level, int isolation_level, FormatableBitSet scanColumnList, DataValueDescriptor[] startKeyValue, int startSearchOperator, Qualifier[][] qualifier, DataValueDescriptor[] stopKeyValue, int stopSearchOperator, long max_rowcnt, int[] key_column_numbers, boolean remove_duplicates, long estimated_rowcnt, long max_inmemory_rowcnt, int initialCapacity, float loadFactor, boolean collect_runtimestats, boolean skipNullKeyColumns, boolean keepAfterCommit, boolean includeRowLocations) - Create a HashSet which contains all rows that qualify for the described scan. |
long | createConglomerate(java.lang.String implementation, DataValueDescriptor[] template, ColumnOrdering[] columnOrder, int[] collationIds, java.util.Properties properties, int temporaryFlag) - Create a conglomerate. |
long | createSort(java.util.Properties implParameters, DataValueDescriptor[] template, ColumnOrdering[] columnOrdering, SortObserver sortObserver, boolean alreadyInOrder, long estimatedRows, int estimatedRowSize) - Create a sort. |
java.lang.Object | createXATransactionFromLocalTransaction(int format_id, byte[] global_id, byte[] branch_id) - Convert a local transaction to a global transaction. |
java.lang.String | debugOpened() - Return a string with debug information about opened congloms/scans/sorts. |
GroupFetchScanController | defragmentConglomerate(long conglomId, boolean online, boolean hold, int open_mode, int lock_level, int isolation_level) - Compress table in place. |
void | destroy() - Abort the current transaction and pop the context. |
void | dropConglomerate(long conglomId) - Drop a conglomerate. |
void | dropSort(long sortid) - Drop a sort. |
boolean | fetchMaxOnBtree(long conglomId, int open_mode, int lock_level, int isolation_level, FormatableBitSet scanColumnList, DataValueDescriptor[] fetchRow) - Retrieve the maximum value row in an ordered conglomerate. |
long | findConglomid(long containerid) - For debugging, find the conglomid given the containerid. |
long | findContainerid(long conglomid) - For debugging, find the containerid given the conglomid. |
AccessFactory | getAccessManager() - Get a reference to the access factory which started this transaction. |
java.lang.String | getActiveStateTxIdString() - Get the string id the transaction has when it is in the active state. |
ContextManager | getContextManager() - Get the context manager that the transaction was created with. |
DynamicCompiledOpenConglomInfo | getDynamicCompiledConglomInfo(long conglomId) - Return dynamic information about the conglomerate to be dynamically reused in repeated execution of a statement. |
FileResource | getFileHandler() - Get an object to handle non-transactional files. |
CompatibilitySpace | getLockSpace() - Return an object that, when used as the compatibility space for a lock request, with the group object being the one returned by a call to getOwner() on that object, guarantees that the lock will be removed on a commit or an abort. |
StaticCompiledOpenConglomInfo | getStaticCompiledConglomInfo(long conglomId) - Return static information about the conglomerate to be included in a compiled plan. |
java.lang.String | getTransactionIdString() - Get the string id of the transaction. |
java.util.Properties | getUserCreateConglomPropList() - A superset of properties that "users" can specify. |
boolean | isGlobal() - Reveals whether the transaction is a global or local transaction. |
boolean | isIdle() - Reveals whether the transaction has ever read or written data. |
boolean | isPristine() - Reveals whether the transaction is read only. |
void | logAndDo(Loggable operation) - Log an operation and then apply it in the context of this transaction. |
ConglomerateController | openCompiledConglomerate(boolean hold, int open_mode, int lock_level, int isolation_level, StaticCompiledOpenConglomInfo static_info, DynamicCompiledOpenConglomInfo dynamic_info) - Open a conglomerate for use, optionally including "compiled" info. |
ScanController | openCompiledScan(boolean hold, int open_mode, int lock_level, int isolation_level, FormatableBitSet scanColumnList, DataValueDescriptor[] startKeyValue, int startSearchOperator, Qualifier[][] qualifier, DataValueDescriptor[] stopKeyValue, int stopSearchOperator, StaticCompiledOpenConglomInfo static_info, DynamicCompiledOpenConglomInfo dynamic_info) - Open a scan on a conglomerate, optionally providing compiled info. |
ConglomerateController | openConglomerate(long conglomId, boolean hold, int open_mode, int lock_level, int isolation_level) - Open a conglomerate for use. |
GroupFetchScanController | openGroupFetchScan(long conglomId, boolean hold, int open_mode, int lock_level, int isolation_level, FormatableBitSet scanColumnList, DataValueDescriptor[] startKeyValue, int startSearchOperator, Qualifier[][] qualifier, DataValueDescriptor[] stopKeyValue, int stopSearchOperator) - Open a scan which gets copies of multiple rows at a time. |
ScanController | openScan(long conglomId, boolean hold, int open_mode, int lock_level, int isolation_level, FormatableBitSet scanColumnList, DataValueDescriptor[] startKeyValue, int startSearchOperator, Qualifier[][] qualifier, DataValueDescriptor[] stopKeyValue, int stopSearchOperator) - Open a scan on a conglomerate. |
SortController | openSort(long id) - Open a sort controller for a sort previously created in this transaction. |
SortCostController | openSortCostController() - Return an open SortCostController. |
RowLocationRetRowSource | openSortRowSource(long id) - Open a scan for retrieving rows from a sort. |
ScanController | openSortScan(long id, boolean hold) - Open a scan for retrieving rows from a sort. |
StoreCostController | openStoreCost(long conglomId) - Return an open StoreCostController for the given conglomid. |
void | purgeConglomerate(long conglomId) - Purge all committed deleted rows from the conglomerate. |
long | recreateAndLoadConglomerate(java.lang.String implementation, boolean recreate_ifempty, DataValueDescriptor[] template, ColumnOrdering[] columnOrder, int[] collationIds, java.util.Properties properties, int temporaryFlag, long orig_conglomId, RowLocationRetRowSource rowSource, long[] rowCount) - Recreate a conglomerate and possibly load it with new rows that come from the new row source. |
int | releaseSavePoint(java.lang.String name, java.lang.Object kindOfSavepoint) - Release the savepoint of the given name. |
int | rollbackToSavePoint(java.lang.String name, boolean close_controllers, java.lang.Object kindOfSavepoint) - Roll back all changes made since the named savepoint was set. |
void | setNoLockWait(boolean noWait) - Tell this transaction whether it should time out immediately if a lock cannot be granted without waiting. |
int | setSavePoint(java.lang.String name, java.lang.Object kindOfSavepoint) - Set a savepoint in the current transaction. |
TransactionController | startNestedUserTransaction(boolean readOnly, boolean flush_log_on_xact_end) - Get a nested user transaction. |
Methods inherited from interface PersistentSet:
getProperties, getProperty, getPropertyDefault, propertyDefaultIsVisible, setProperty, setPropertyDefault
static final int MODE_RECORD
static final int MODE_TABLE
static final int ISOLATION_NOLOCK
static final int ISOLATION_READ_UNCOMMITTED
static final int ISOLATION_READ_COMMITTED
static final int ISOLATION_READ_COMMITTED_NOHOLDLOCK
static final int ISOLATION_REPEATABLE_READ
static final int ISOLATION_SERIALIZABLE
static final int OPENMODE_USE_UPDATE_LOCKS
Note that one must still set OPENMODE_FORUPDATE to be able to change rows in the scan. So to enable update locks for an updating scan, one provides (OPENMODE_FORUPDATE | OPENMODE_USE_UPDATE_LOCKS).
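For example (a usage sketch, not part of the original javadoc), an updating scan that wants update locks passes both flags:

    // Update locks only take effect when the open is also for update.
    int open_mode = TransactionController.OPENMODE_FORUPDATE
                  | TransactionController.OPENMODE_USE_UPDATE_LOCKS;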
static final int OPENMODE_SECONDARY_LOCKED
static final int OPENMODE_BASEROW_INSERT_LOCKED
static final int OPENMODE_FORUPDATE
static final int OPENMODE_FOR_LOCK_ONLY
static final int OPENMODE_LOCK_NOWAIT
The request to get the table lock (any table lock, including intent or "real" table level lock) will not wait if it can't be granted. A lock timeout will be returned. Note that subsequent row locks will wait if the application has not set a 0 timeout and if the call does not have a wait parameter (like OpenConglomerate.fetch()).
static final int OPENMODE_LOCK_ROW_NOWAIT
The request to get the row lock (any row lock including intent or "real" row level lock), will not wait if it can't be granted. A lock timeout will be returned.
static final int OPEN_CONGLOMERATE
static final int OPEN_SCAN
static final int OPEN_CREATED_SORTS
static final int OPEN_SORT
static final int OPEN_TOTAL
static final byte IS_DEFAULT
static final byte IS_TEMPORARY
static final byte IS_KEPT
static final int RELEASE_LOCKS
static final int KEEP_LOCKS
static final int READONLY_TRANSACTION_INITIALIZATION
AccessFactory getAccessManager()
boolean conglomerateExists(long conglomId) throws StandardException
Parameters:
conglomId - The identifier of the conglomerate to check for.
Throws:
StandardException - only thrown if something goes wrong in the lower levels.

long createConglomerate(java.lang.String implementation, DataValueDescriptor[] template, ColumnOrdering[] columnOrder, int[] collationIds, java.util.Properties properties, int temporaryFlag) throws StandardException
Currently, only "heap"'s and ""btree secondary index"'s are supported, and all the features are not completely implemented. For now, create conglomerates like this:
Each implementation of a conglomerate takes a possibly different set of properties. The "heap" implementation currently takes no properties. The "btree secondary index" requires the following set of properties:TransactionController tc; long conglomId = tc.createConglomerate( "heap", // we're requesting a heap conglomerate template, // a populated template is required for heap and btree. null, // no column order null, // default collation order for all columns null, // default properties 0); // not temporary
Parameters:
implementation - Specifies what kind of conglomerate to create. THE WAY THAT THE IMPLEMENTATION IS CHOSEN STILL NEEDS SOME WORK. For now, use "BTREE" or "heap" for a local access manager.
template - A row which describes the prototypical row that the conglomerate will be holding. Typically this row gives the conglomerate information about the number and type of columns it will be holding. The implementation may require a specific subclass of row type. Note that the createConglomerate call reads the template and makes a copy of any necessary information from it; no reference to the template is kept (and thus the template can be re-used in subsequent calls, such as openScan()). This field is required when creating either a heap or btree conglomerate.
columnOrder - Specifies the columns' sort order. Useful only when the conglomerate is of type BTREE; the default value is null, which means all columns are sorted in ascending order.
collationIds - Specifies the collation id of each of the columns in the new conglomerate. Collation id along with format id may be used to create DataValueDescriptors which may subsequently be used for comparisons. For instance, the correct collation-specific order and searching is maintained by correctly specifying the collation id of the columns in the index when the index is created.
properties - Implementation-specific properties of the conglomerate.
temporaryFlag - Can have the following values:
IS_DEFAULT - no bit is set.
IS_TEMPORARY - if set, the conglomerate is temporary.
IS_KEPT - only looked at if IS_TEMPORARY is set; if set, the temporary container is not removed automatically by store when the transaction terminates.
If IS_TEMPORARY is set, the conglomerate is temporary. Temporary conglomerates are only visible through the transaction controller that created them. Otherwise, they are opened, scanned, and dropped in the same way as permanent conglomerates. Changes to temporary conglomerates persist across commits, but temporary conglomerates are truncated on abort (or rollback to savepoint). Updates to temporary conglomerates are not locked or logged.
A temporary conglomerate is only visible to the transaction controller that created it, even if the conglomerate is IS_KEPT when the transaction terminates. All temporary conglomerates are removed by store when the conglomerate controller is destroyed, or when dropped by an explicit dropConglomerate. If Derby reboots, all temporary conglomerates are removed.
Throws:
StandardException - if the conglomerate could not be created for some reason.

long createAndLoadConglomerate(java.lang.String implementation, DataValueDescriptor[] template, ColumnOrdering[] columnOrder, int[] collationIds, java.util.Properties properties, int temporaryFlag, RowLocationRetRowSource rowSource, long[] rowCount) throws StandardException
This function behaves the same as createConglomerate() except that it also populates the conglomerate with rows from the row source, and the rows that are inserted are not logged.
Individual rows that are loaded into the conglomerate are not logged. After this operation, the underlying database must be backed up with a database backup rather than a transaction log backup (when we have them). This warning is put here for the benefit of future generations.
Parameters:
implementation - Specifies what kind of conglomerate to create. THE WAY THAT THE IMPLEMENTATION IS CHOSEN STILL NEEDS SOME WORK. For now, use "BTREE" or "heap" for a local access manager.
template - A row which describes the prototypical row that the conglomerate will be holding. Typically this row gives the conglomerate information about the number and type of columns it will be holding. The implementation may require a specific subclass of row type. Note that the createConglomerate call reads the template and makes a copy of any necessary information from it; no reference to the template is kept (and thus the template can be re-used in subsequent calls, such as openScan()). This field is required when creating either a heap or btree conglomerate.
columnOrder - Specifies the columns' sort order. Useful only when the conglomerate is of type BTREE; the default value is null, which means all columns are sorted in ascending order.
collationIds - Specifies the collation id of each of the columns in the new conglomerate. Collation id along with format id may be used to create DataValueDescriptors which may subsequently be used for comparisons. For instance, the correct collation-specific order and searching is maintained by correctly specifying the collation id of the columns in the index when the index is created.
properties - Implementation-specific properties of the conglomerate.
rowSource - the interface to receive rows to load into the conglomerate.
rowCount - if not null, the number of rows loaded into the table will be returned as the first element of the array.
Throws:
StandardException - if the conglomerate could not be created or loaded for some reason. Throws SQLState.STORE_CONGLOMERATE_DUPLICATE_KEY_EXCEPTION if the conglomerate supports uniqueness checks and has been created to disallow duplicates, and one of the rows being loaded had key columns which were duplicates of a row already in the conglomerate.

long recreateAndLoadConglomerate(java.lang.String implementation, boolean recreate_ifempty, DataValueDescriptor[] template, ColumnOrdering[] columnOrder, int[] collationIds, java.util.Properties properties, int temporaryFlag, long orig_conglomId, RowLocationRetRowSource rowSource, long[] rowCount) throws StandardException
This function behaves the same as createConglomerate() except that it also populates the conglomerate with rows from the row source, and the rows that are inserted are not logged.
Individual rows that are loaded into the conglomerate are not logged. After this operation, the underlying database must be backed up with a database backup rather than a transaction log backup (when we have them). This warning is put here for the benefit of future generations.
Parameters:
implementation - Specifies what kind of conglomerate to create. THE WAY THAT THE IMPLEMENTATION IS CHOSEN STILL NEEDS SOME WORK. For now, use "BTREE" or "heap" for a local access manager.
recreate_ifempty - If false, and the row source used to load the new conglomerate returns no rows, then the original conglomid will be returned. To the client it will be as if no call was made. Underlying implementations may actually create and drop a container. If true, then a new empty container will be created and its conglomid will be returned.
template - A row which describes the prototypical row that the conglomerate will be holding. Typically this row gives the conglomerate information about the number and type of columns it will be holding. The implementation may require a specific subclass of row type. Note that the createConglomerate call reads the template and makes a copy of any necessary information from it; no reference to the template is kept (and thus the template can be re-used in subsequent calls, such as openScan()). This field is required when creating either a heap or btree conglomerate.
columnOrder - Specifies the columns' sort order. Useful only when the conglomerate is of type BTREE; the default value is null, which means all columns are sorted in ascending order.
collationIds - Specifies the collation id of each of the columns in the new conglomerate. Collation id along with format id may be used to create DataValueDescriptors which may subsequently be used for comparisons. For instance, the correct collation-specific order and searching is maintained by correctly specifying the collation id of the columns in the index when the index is created.
properties - Implementation-specific properties of the conglomerate.
temporaryFlag - If true, the conglomerate is temporary. Temporary conglomerates are only visible through the transaction controller that created them. Otherwise, they are opened, scanned, and dropped in the same way as permanent conglomerates. Changes to temporary conglomerates persist across commits, but temporary conglomerates are truncated on abort (or rollback to savepoint). Updates to temporary conglomerates are not locked or logged.
orig_conglomId - The conglomid of the original conglomerate.
rowSource - interface to receive rows to load into the conglomerate.
rowCount - if not null, the number of rows loaded into the table will be returned as the first element of the array.
Throws:
StandardException - if the conglomerate could not be created or loaded for some reason. Throws SQLState.STORE_CONGLOMERATE_DUPLICATE_KEY_EXCEPTION if the conglomerate supports uniqueness checks and has been created to disallow duplicates, and one of the rows being loaded had key columns which were duplicates of a row already in the conglomerate.

void addColumnToConglomerate(long conglomId, int column_id, Storable template_column, int collation_id) throws StandardException
Parameters:
conglomId - The identifier of the conglomerate to alter.
column_id - The column number at which to add this column.
template_column - An instance of the column to be added to the table.
collation_id - Collation id of the added column.
Throws:
StandardException - Only some types of conglomerates can support adding a column; for instance, "heap" conglomerates support adding a column while "btree" conglomerates do not. If the column cannot be added an exception will be thrown.

void dropConglomerate(long conglomId) throws StandardException
Parameters:
conglomId - The identifier of the conglomerate to drop.
Throws:
StandardException - if the conglomerate could not be dropped for some reason.

long findConglomid(long containerid) throws StandardException
Throws:
StandardException - Standard exception policy.

long findContainerid(long conglomid) throws StandardException
Will have to change if we ever have more than one container in a conglomerate.
Throws:
StandardException - Standard exception policy.

TransactionController startNestedUserTransaction(boolean readOnly, boolean flush_log_on_xact_end) throws StandardException
A nested user transaction can be used exactly as any other TransactionController, except as follows. For this discussion let the parent transaction be the transaction used to make the startNestedUserTransaction() call, and let the child transaction be the transaction returned by that call.
A parent transaction can nest a single read-only transaction and a single separate read/write transaction. If a subsequent nested transaction creation is attempted against the parent prior to destroying an existing nested user transaction of the same type, an exception will be thrown.
The nesting is limited to one level deep. An exception will be thrown if a subsequent startNestedUserTransaction() is called on the child transaction.
The locks in the child transaction of a readOnly nested user transaction will be compatible with the locks of the parent transaction. The locks in the child transaction of a non-readOnly nested user transaction will NOT be compatible with those of the parent transaction - this is necessary for correct recovery behavior.
A commit in the child transaction will release locks associated with the child transaction only; work can continue in the parent transaction at that point.
Any abort of the child transaction will result in an abort of both the child transaction and the parent transaction, whether initiated by an explicit abort() call or by an exception that results in an abort.
A TransactionController.destroy() call should be made on the child transaction once all child work is done and the caller wishes to continue work in the parent transaction.
AccessFactory.getTransaction() will always return the "parent" transaction, never the child transaction. Thus clients using nested user transactions must keep track of the child transaction themselves, as there is no interface to query the storage system for the current child transaction. The idea is that a nested user transaction should be used for a limited amount of work, committed, and then work continues in the parent transaction.
Nested user transactions are meant to be used to implement system work that must commit as part of implementing a user's request, but where holding the locks for the duration of the user transaction is not acceptable. Two examples of this are the system catalog read locks accumulated while compiling a plan, and auto-increment.
Once the first write of a non-readOnly nested transaction is done, the nested user transaction must be committed or aborted before any write operation is attempted in the parent transaction.
The fix for DERBY-5493 introduced a behavior change for commits executed against an updatable nested user transaction. Prior to this change, commits would execute a "lazy" commit where the commit log record was only written to the stream, not guaranteed to disk. After this change, commits on these transactions are always forced to disk. To get the previous behavior one must call commitNoSync() instead.
Examples of current usage of nested updatable transactions in Derby include (see the sketch after this list):
- Recompiling and saving stored prepared statements; changed with DERBY-5493 to do a synchronous commit. Code in SPSDescriptor.java.
- The sequence updater, which reserves a new "range" of values in the sequence catalog; changed with DERBY-5493 to do a synchronous commit. Without this change a system crash might lose the update of the range and then return the same value on reboot. Code in SequenceUpdater.java.
- The in-place compress defragment phase, which commits units of work while moving tuples around in the heap and indexes; changed with DERBY-5493 to do a synchronous commit. Code in AlterTableConstantAction.java.
- Creation of a user's initial default schema in SYSSCHEMAS; changed with DERBY-5493 to do a synchronous commit. Code in DDLConstantAction.java.
- The autoincrement/generated key case, which kept its pre-DERBY-5493 behavior by changing to commitNoSync() and defaulting flush_log_on_xact_end to false; changing every key allocation to a synchronous commit would be a huge performance problem for existing applications depending on current performance. Code in InsertResultSet.java.
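A minimal sketch of the intended pattern (illustrative only; tc and the body of the system work are assumptions):

    // Do system work in a nested read/write transaction so that its locks
    // are released at its own commit, independently of the parent.
    TransactionController child =
        tc.startNestedUserTransaction(false, true); // read/write, sync log on commit
    try {
        // ... e.g. reserve a new range of sequence values ...
        child.commit();   // releases only the child's locks
    } finally {
        child.destroy();  // resume work in the parent transaction
    }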
Parameters:
readOnly - Is the transaction read-only? Only one non-read-only nested transaction is allowed per transaction.
flush_log_on_xact_end - Should the transaction commit and abort be synced to the log by default? Normal usage should pick true, unless there is a specific performance need and usage works correctly if a commit can be lost on a system crash.
Throws:
StandardException - Standard exception policy.

java.util.Properties getUserCreateConglomPropList()
A superset of properties that "users" (ie. from sql) can specify. Store may implement other properties which should not be specified by users. Layers above access may implement properties which are not known at all to Access.
This list is a superset, as some properties may not be implemented by certain types of conglomerates. For instant an in-memory store may not implement a pageSize property. Or some conglomerates may not support pre-allocation.
This interface is meant to be used by the SQL parser to do validation of properties passsed to the create table statement, and also by the various user interfaces which present table information back to the user.
Currently this routine returns the following list: derby.storage.initialPages derby.storage.minimumRecordSize derby.storage.pageReservedSpace derby.storage.pageSize
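As an illustrative sketch (assuming the target conglomerate honors the property), one of these properties can be passed to createConglomerate():

    // Supply a storage property at conglomerate creation time.
    java.util.Properties props = new java.util.Properties();
    props.setProperty("derby.storage.pageSize", "4096");
    long id = tc.createConglomerate("heap", template, null, null, props, 0);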
ConglomerateController openConglomerate(long conglomId, boolean hold, int open_mode, int lock_level, int isolation_level) throws StandardException
The lock level indicates the minimum lock level to get locks at; the underlying conglomerate implementation may actually lock at a higher level (i.e. the caller may request MODE_RECORD, but the table may be locked at MODE_TABLE instead).
The close method is on the ConglomerateController interface.
Parameters:
conglomId - The identifier of the conglomerate to open.
hold - If true, the conglomerate will be maintained open over commits.
open_mode - Specify flags to control opening of the table. OPENMODE_FORUPDATE - if set, open the table for update, otherwise open the table shared.
lock_level - One of (MODE_TABLE, MODE_RECORD).
isolation_level - The isolation level to lock the conglomerate at. One of (ISOLATION_READ_COMMITTED, ISOLATION_REPEATABLE_READ or ISOLATION_SERIALIZABLE).
Throws:
StandardException - if the conglomerate could not be opened for some reason. Throws SQLState.STORE_CONGLOMERATE_DOES_NOT_EXIST if the conglomId being requested does not exist for some reason (i.e. someone has dropped it).

ConglomerateController openCompiledConglomerate(boolean hold, int open_mode, int lock_level, int isolation_level, StaticCompiledOpenConglomInfo static_info, DynamicCompiledOpenConglomInfo dynamic_info) throws StandardException
Same as openConglomerate(), except that one can optionally provide "compiled" static_info and/or dynamic_info. This compiled information must have been obtained from getDynamicCompiledConglomInfo() and/or getStaticCompiledConglomInfo() calls on the same conglomid being opened. It is up to the caller to ensure that the "compiled" information is still valid and is appropriately protected for multi-threaded access.
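A sketch of the intended call sequence (illustrative; tc and conglomId are assumptions):

    // Obtain the compiled info once, then reuse it on each execution.
    StaticCompiledOpenConglomInfo static_info =
        tc.getStaticCompiledConglomInfo(conglomId);
    DynamicCompiledOpenConglomInfo dynamic_info =
        tc.getDynamicCompiledConglomInfo(conglomId);

    ConglomerateController cc = tc.openCompiledConglomerate(
        false,
        TransactionController.OPENMODE_FORUPDATE,
        TransactionController.MODE_RECORD,
        TransactionController.ISOLATION_READ_COMMITTED,
        static_info,
        dynamic_info);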
Parameters:
hold - If true, the conglomerate will be maintained open over commits.
open_mode - Specify flags to control opening of the table.
lock_level - One of (MODE_TABLE, MODE_RECORD).
isolation_level - The isolation level to lock the conglomerate at. One of (ISOLATION_READ_COMMITTED, ISOLATION_REPEATABLE_READ or ISOLATION_SERIALIZABLE).
static_info - object returned from getStaticCompiledConglomInfo() call on this id.
dynamic_info - object returned from getDynamicCompiledConglomInfo() call on this id.
Throws:
StandardException - Standard exception policy.
See Also:
openConglomerate(long, boolean, int, int, int), getDynamicCompiledConglomInfo(long), getStaticCompiledConglomInfo(long), DynamicCompiledOpenConglomInfo, StaticCompiledOpenConglomInfo
BackingStoreHashtable createBackingStoreHashtableFromScan(long conglomId, int open_mode, int lock_level, int isolation_level, FormatableBitSet scanColumnList, DataValueDescriptor[] startKeyValue, int startSearchOperator, Qualifier[][] qualifier, DataValueDescriptor[] stopKeyValue, int stopSearchOperator, long max_rowcnt, int[] key_column_numbers, boolean remove_duplicates, long estimated_rowcnt, long max_inmemory_rowcnt, int initialCapacity, float loadFactor, boolean collect_runtimestats, boolean skipNullKeyColumns, boolean keepAfterCommit, boolean includeRowLocations) throws StandardException
All parameters shared between openScan() and this routine are interpreted exactly the same. Logically this routine calls openScan() with the passed-in set of parameters and then places all returned rows into a newly created HashSet; actual implementations will likely perform better than literally calling openScan() and doing this. For documentation of the openScan parameters see openScan().
Parameters:
conglomId - see openScan()
open_mode - see openScan()
lock_level - see openScan()
isolation_level - see openScan()
scanColumnList - see openScan()
startKeyValue - see openScan()
startSearchOperator - see openScan()
qualifier - see openScan()
stopKeyValue - see openScan()
stopSearchOperator - see openScan()
max_rowcnt - The maximum number of rows to insert into the HashSet. Pass in -1 if there is no maximum.
key_column_numbers - The column numbers of the columns in the scan result row to be the key to the Hashtable. "0" is the first column in the scan result row (which may be different than the first row in the table of the scan).
remove_duplicates - Should the HashSet automatically remove duplicates, or should it create the Vector of duplicates?
estimated_rowcnt - The number of rows that the caller estimates will be inserted into the sort. -1 indicates that the caller has no idea. Used by the sort to make good choices about in-memory vs. external sorting, and to size merge runs.
max_inmemory_rowcnt - The number of rows at which the underlying Hashtable implementation should cut over from an in-memory hash to a disk-based access method.
initialCapacity - If not "-1", used to initialize the java Hashtable.
loadFactor - If not "-1", used to initialize the java Hashtable.
collect_runtimestats - If true, will collect runtime stats during scan processing for retrieval by BackingStoreHashtable.getRuntimeStats().
skipNullKeyColumns - Whether or not to skip rows with 1 or more null key columns.
keepAfterCommit - If true, keep the hash table after commit.
includeRowLocations - If true, rows should include RowLocations.
Throws:
StandardException - Standard exception policy.
See Also:
BackingStoreHashtable, openScan(long, boolean, int, int, int, org.apache.derby.iapi.services.io.FormatableBitSet, org.apache.derby.iapi.types.DataValueDescriptor[], int, org.apache.derby.iapi.store.access.Qualifier[][], org.apache.derby.iapi.types.DataValueDescriptor[], int)
ScanController openScan(long conglomId, boolean hold, int open_mode, int lock_level, int isolation_level, FormatableBitSet scanColumnList, DataValueDescriptor[] startKeyValue, int startSearchOperator, Qualifier[][] qualifier, DataValueDescriptor[] stopKeyValue, int stopSearchOperator) throws StandardException
The way that starting and stopping keys and operators are used may best be described by example. Say there's an ordered conglomerate with two columns, where the 0-th column is named 'x', and the 1st column is named 'y'. The values of the columns are as follows:
x: 1 3 4 4 4 5 5 5 6 7 9
y: 1 1 2 4 6 2 4 6 1 1 1
A {start key, search op} pair of {{5,2}, GE} would position on {x=5, y=2}, whereas the pair {{5}, GT} would position on {x=6, y=1}.
Partial keys are used to implement partial key scans in SQL. For example, the SQL "select * from t where x = 5" would open a scan on the conglomerate (or a useful index) of t using a starting position partial key of {{5}, GE} and a stopping position partial key of {{5}, GT}.
Some more examples:
predicate | start key value | start key op | stop key value | stop key op | rows returned | rows locked (serialization) |
---|---|---|---|---|---|---|
x = 5 | {5} | GE | {5} | GT | {5,2} .. {5,6} | {4,6} .. {5,6} |
x > 5 | {5} | GT | null | | {6,1} .. {9,1} | {5,6} .. {9,1} |
x >= 5 | {5} | GE | null | | {5,2} .. {9,1} | {4,6} .. {9,1} |
x <= 5 | null | | {5} | GT | {1,1} .. {5,6} | first .. {5,6} |
x < 5 | null | | {5} | GE | {1,1} .. {4,6} | first .. {4,6} |
x >= 5 and x <= 7 | {5} | GE | {7} | GT | {5,2} .. {7,1} | {4,6} .. {7,1} |
x = 5 and y > 2 | {5,2} | GT | {5} | GT | {5,4} .. {5,6} | {5,2} .. {5,6} |
x = 5 and y >= 2 | {5,2} | GE | {5} | GT | {5,2} .. {5,6} | {4,6} .. {5,6} |
x = 5 and y < 5 | {5} | GE | {5,5} | GE | {5,2} .. {5,4} | {4,6} .. {5,4} |
x = 2 | {2} | GE | {2} | GT | none | {1,1} .. {1,1} |

As the above table implies, the underlying scan may lock more rows than it returns in order to guarantee serialization.
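For instance, the "x = 5" scan from the table above could be opened as follows (an illustrative sketch; tc and conglomId are assumptions, and SQLInteger is used here as a convenient DataValueDescriptor implementation):

    // Scan the rows satisfying "x = 5" using partial start/stop keys.
    DataValueDescriptor[] startKey = { new SQLInteger(5) };
    DataValueDescriptor[] stopKey  = { new SQLInteger(5) };

    ScanController sc = tc.openScan(
        conglomId,
        false,               // hold
        0,                   // read-only open mode
        TransactionController.MODE_RECORD,
        TransactionController.ISOLATION_READ_COMMITTED,
        null,                // return all columns
        startKey, ScanController.GE,
        null,                // no qualifier
        stopKey,  ScanController.GT);

    while (sc.next()) {
        // ... fetch the current row ...
    }
    sc.close();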
For each row which meets the start and stop position as described above, the row is "qualified" to see whether it should be returned. The qualification is a two-dimensional array of Qualifiers (see Qualifier) which represents the qualification in conjunctive normal form (CNF). Conjunctive normal form is an "and'd" set of "or'd" Qualifiers.
For example, x = 5 would be represented in pseudo code as:

    qualifier_cnf = new Qualifier[1][];
    qualifier_cnf[0] = new Qualifier[1];
    qualifier_cnf[0][0] = new Qualifier(x = 5);

For example, (x = 5) or (y = 6) would be represented in pseudo code as:

    qualifier_cnf = new Qualifier[1][];
    qualifier_cnf[0] = new Qualifier[2];
    qualifier_cnf[0][0] = new Qualifier(x = 5);
    qualifier_cnf[0][1] = new Qualifier(y = 6);

For example, ((x = 5) or (x = 6)) and ((y = 1) or (y = 2)) would be represented in pseudo code as:

    qualifier_cnf = new Qualifier[2][];
    qualifier_cnf[0] = new Qualifier[2];
    qualifier_cnf[1] = new Qualifier[2];
    qualifier_cnf[0][0] = new Qualifier(x = 5);
    qualifier_cnf[0][1] = new Qualifier(x = 6);
    qualifier_cnf[1][0] = new Qualifier(y = 1);
    qualifier_cnf[1][1] = new Qualifier(y = 2);
For each row, the CNF qualifier is processed to determine whether or not the row should be returned to the caller. The following pseudo-code describes how this is done:

    if (qualifier != null)
    {
        for (int and_clause = 0; and_clause < qualifier.length; and_clause++)
        {
            boolean or_qualifies = false;

            for (int or_clause = 0; or_clause < qualifier[and_clause].length; or_clause++)
            {
                DataValueDescriptor key =
                    qualifier[and_clause][or_clause].getOrderable();
                DataValueDescriptor row_col =
                    get row column[qualifier[and_clause][or_clause].getColumnId()];

                or_qualifies = row_col.compare(
                    qualifier[and_clause][or_clause].getOperator(),
                    key,
                    qualifier[and_clause][or_clause].getOrderedNulls(),
                    qualifier[and_clause][or_clause].getUnknownRV());

                if (or_qualifies)
                    break;
            }

            if (!or_qualifies)
            {
                // don't return this row to the client - proceed to next row
            }
        }
    }
Parameters:
conglomId - The identifier of the conglomerate to open the scan for.
hold - If true, this scan will be maintained open over commits.
open_mode - Specify flags to control opening of the table. OPENMODE_FORUPDATE - if set, open the table for update, otherwise open the table shared.
lock_level - One of (MODE_TABLE, MODE_RECORD).
isolation_level - The isolation level to lock the conglomerate at. One of (ISOLATION_READ_COMMITTED, ISOLATION_REPEATABLE_READ or ISOLATION_SERIALIZABLE).
scanColumnList - A description of which columns to return from every fetch in the scan. The template and scanColumnList work together to describe the row to be returned by the scan - see RowUtil for a description of how these parameters work together to describe a "row".
startKeyValue - An indexable row which holds a (partial) key value which, in combination with the startSearchOperator, defines the starting position of the scan. If null, the starting position of the scan is the first row of the conglomerate. The startKeyValue must only reference columns included in the scanColumnList.
startSearchOperator - an operator which defines how the startKeyValue is to be searched for. If startSearchOperator is ScanController.GE, the scan starts on the first row which is greater than or equal to the startKeyValue. If startSearchOperator is ScanController.GT, the scan starts on the first row whose key is greater than startKeyValue. The startSearchOperator parameter is ignored if the startKeyValue parameter is null.
qualifier - A 2-dimensional array encoding a conjunctive normal form (CNF) datastructure of qualifiers which, applied to each key, restrict the rows returned by the scan. Rows for which the CNF expression returns false are not returned by the scan. If null, all rows are returned. Qualifiers can only reference columns which are included in the scanColumnList. The column id that a qualifier returns is the column id in the table, not the column id in the partial row being returned. For a detailed description of the 2-dimensional array, see Qualifier.
stopKeyValue - An indexable row which holds a (partial) key value which, in combination with the stopSearchOperator, defines the ending position of the scan. If null, the ending position of the scan is the last row of the conglomerate. The stopKeyValue must only reference columns included in the scanColumnList.
stopSearchOperator - an operator which defines how the stopKeyValue is used to determine the scan stopping position. If stopSearchOperator is ScanController.GE, the scan stops just before the first row which is greater than or equal to the stopKeyValue. If stopSearchOperator is ScanController.GT, the scan stops just before the first row whose key is greater than stopKeyValue. The stopSearchOperator parameter is ignored if the stopKeyValue parameter is null.
Throws:
StandardException - if the scan could not be opened for some reason. Throws SQLState.STORE_CONGLOMERATE_DOES_NOT_EXIST if the conglomId being requested does not exist for some reason (i.e. someone has dropped it).
See Also:
RowUtil, ScanController
ScanController openCompiledScan(boolean hold, int open_mode, int lock_level, int isolation_level, FormatableBitSet scanColumnList, DataValueDescriptor[] startKeyValue, int startSearchOperator, Qualifier[][] qualifier, DataValueDescriptor[] stopKeyValue, int stopSearchOperator, StaticCompiledOpenConglomInfo static_info, DynamicCompiledOpenConglomInfo dynamic_info) throws StandardException
Same as openScan(), except that one can optionally provide "compiled" static_info and/or dynamic_info. This compiled information must have been obtained from getDynamicCompiledConglomInfo() and/or getStaticCompiledConglomInfo() calls on the same conglomid being opened. It is up to the caller to ensure that the "compiled" information is still valid and is appropriately protected for multi-threaded access.
Parameters:
open_mode - see openScan()
lock_level - see openScan()
isolation_level - see openScan()
scanColumnList - see openScan()
startKeyValue - see openScan()
startSearchOperator - see openScan()
qualifier - see openScan()
stopKeyValue - see openScan()
stopSearchOperator - see openScan()
static_info - object returned from getStaticCompiledConglomInfo() call on this id.
dynamic_info - object returned from getDynamicCompiledConglomInfo() call on this id.
Throws:
StandardException - Standard exception policy.
See Also:
openScan(long, boolean, int, int, int, org.apache.derby.iapi.services.io.FormatableBitSet, org.apache.derby.iapi.types.DataValueDescriptor[], int, org.apache.derby.iapi.store.access.Qualifier[][], org.apache.derby.iapi.types.DataValueDescriptor[], int), getDynamicCompiledConglomInfo(long), getStaticCompiledConglomInfo(long), DynamicCompiledOpenConglomInfo, StaticCompiledOpenConglomInfo
GroupFetchScanController openGroupFetchScan(long conglomId, boolean hold, int open_mode, int lock_level, int isolation_level, FormatableBitSet scanColumnList, DataValueDescriptor[] startKeyValue, int startSearchOperator, Qualifier[][] qualifier, DataValueDescriptor[] stopKeyValue, int stopSearchOperator) throws StandardException
All inputs work exactly as in openScan(). The return is a GroupFetchScanController, which only allows fetches of groups of rows from the conglomerate.
Parameters:
conglomId - see openScan()
open_mode - see openScan()
lock_level - see openScan()
isolation_level - see openScan()
scanColumnList - see openScan()
startKeyValue - see openScan()
startSearchOperator - see openScan()
qualifier - see openScan()
stopKeyValue - see openScan()
stopSearchOperator - see openScan()
Throws:
StandardException - Standard exception policy.
See Also:
ScanController, GroupFetchScanController
GroupFetchScanController defragmentConglomerate(long conglomId, boolean online, boolean hold, int open_mode, int lock_level, int isolation_level) throws StandardException
Returns a GroupFetchScanController which can be used to move rows around in a table, creating a block of free pages at the end of the table. The process will move rows from the end of the table toward the beginning. The GroupFetchScanController will return the old row location, the new row location, and the actual data of any row moved. Note that this scan only returns moved rows, not an entire set of rows; the scan is designed specifically to be used by either an explicit user call of the SYSCS_ONLINE_COMPRESS_TABLE() procedure, or internal background calls to compress the table. The old and new row locations are returned so that the caller can update any indexes as necessary. This scan always returns all columns of the row. All inputs work exactly as in openScan(). The return is a GroupFetchScanController, which only allows fetches of groups of rows from the conglomerate.
Parameters:
conglomId - see openScan()
hold - see openScan()
open_mode - see openScan()
lock_level - see openScan()
isolation_level - see openScan()
Throws:
StandardException - Standard exception policy.
See Also:
ScanController, GroupFetchScanController
void purgeConglomerate(long conglomId) throws StandardException
This call will purge committed deleted rows from the conglomerate, so that space will be available for future inserts into the conglomerate.
Parameters:
conglomId - Id of the conglomerate to purge.
Throws:
StandardException - Standard exception policy.

void compressConglomerate(long conglomId) throws StandardException
Returns free space from the conglomerate back to the OS. Currently only the sequential free pages at the "end" of the conglomerate can be returned to the OS.
Parameters:
conglomId - Id of the conglomerate to compress.
Throws:
StandardException - Standard exception policy.

boolean fetchMaxOnBtree(long conglomId, int open_mode, int lock_level, int isolation_level, FormatableBitSet scanColumnList, DataValueDescriptor[] fetchRow) throws StandardException
Returns true and fetches the rightmost non-null row of an ordered conglomerate into "fetchRow" if there is at least one non-null row in the conglomerate. If there are no non-null rows in the conglomerate it returns false. Any row whose first column is null is considered a "null" row.
Non-ordered conglomerates will not implement this interface; calls will generate a StandardException.
RESOLVE - this interface is temporary; long term, equivalent (and more) functionality will be provided by the openBackwardScan() interface.
ISOLATION_SERIALIZABLE and MODE_RECORD locking for btree max: The "BTREE" implementation will at the very least get a shared row lock on the max key row and the key previous to the max. This will be the case where the max row exists in the rightmost page of the btree. These locks won't be released. If the row does not exist in the last page of the btree then a scan of the entire btree will be performed; locks acquired in this scan will not be released.
Note that under ISOLATION_READ_COMMITTED, all locks on the table are released before returning from this call.
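An illustrative sketch (indexConglomId and maxRow are assumptions; maxRow must match the index row template):

    // Fetch the maximum-key row from an ordered (btree) conglomerate.
    boolean found = tc.fetchMaxOnBtree(
        indexConglomId,
        0,                    // read-only open mode
        TransactionController.MODE_RECORD,
        TransactionController.ISOLATION_READ_COMMITTED,
        null,                 // all columns
        maxRow);              // filled in when found is true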
Parameters:
conglomId - The identifier of the conglomerate to open the scan for.
open_mode - Specify flags to control opening of the table. OPENMODE_FORUPDATE - if set, open the table for update, otherwise open the table shared.
lock_level - One of (MODE_TABLE, MODE_RECORD).
isolation_level - The isolation level to lock the conglomerate at. One of (ISOLATION_READ_COMMITTED, ISOLATION_REPEATABLE_READ or ISOLATION_SERIALIZABLE).
scanColumnList - A description of which columns to return from every fetch in the scan. The template and scanColumnList work together to describe the row to be returned by the scan - see RowUtil for a description of how these parameters work together to describe a "row".
fetchRow - The row to retrieve the maximum value into.
Throws:
StandardException - Standard exception policy.

StoreCostController openStoreCost(long conglomId) throws StandardException
Return an open StoreCostController which can be used to ask about the estimated row counts and costs of ScanController and ConglomerateController operations on the given conglomerate.
Parameters:
conglomId - The identifier of the conglomerate to open.
Throws:
StandardException - Standard exception policy.
See Also:
StoreCostController
int countOpens(int which_to_count) throws StandardException
There are 4 types of open "conglomerates" that can be tracked: those opened by each of openConglomerate(), openScan(), createSort(), and openSort(). Scans opened by openSortScan() are tracked the same as those opened by openScan(). This routine can be used either to report on the number of all opens, or to track one particular type of open.
This routine is expected to be used for debugging only. An implementation may only track this info under SanityManager.DEBUG mode. If the implementation does not track the info it will return -1 (so code using this call to verify that no congloms are open should check for return <= 0 rather than == 0).
The return value depends on the which_to_count parameter: pass one of the OPEN_* constants (OPEN_CONGLOMERATE, OPEN_SCAN, OPEN_CREATED_SORTS, OPEN_SORT) to count one particular type of open, or OPEN_TOTAL to count them all.
Parameters:
which_to_count - Which kind of open to report on.
Throws:
StandardException - Standard exception policy.
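A debugging sketch combining the two routines (illustrative only):

    // Sanity check, e.g. just before commit: nothing should still be open.
    if (tc.countOpens(TransactionController.OPEN_TOTAL) > 0) {
        System.out.println("open handles remain: " + tc.debugOpened());
    }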
java.lang.String debugOpened() throws StandardException

Return a string with debugging information about currently opened congloms/scans/sorts which have not been close()'d. Calls to this routine are only valid under code which is conditional on SanityManager.DEBUG.
Throws:
StandardException - Standard exception policy.

FileResource getFileHandler()
CompatibilitySpace getLockSpace()
Return an object that, when used as the compatibility space for a lock request, with the group object being the one returned by a call to getOwner() on that object, guarantees that the lock will be removed on a commit or an abort.

void setNoLockWait(boolean noWait)
Parameters:
noWait - if true, never wait for a lock in this transaction, but time out immediately.
See Also:
LockOwner.noWait(), Transaction.setNoLockWait(boolean)
StaticCompiledOpenConglomInfo getStaticCompiledConglomInfo(long conglomId) throws StandardException
The static info would be valid until any DDL was executed on the conglomid, and it would be up to the caller to throw it away when that happened. This ties in with what language already does for other invalidation of static info. The type of info in this object is the containerid and an array of format ids from which templates can be created. The info in this object is read-only and can be shared among as many threads as necessary.
Parameters:
conglomId - The identifier of the conglomerate to open.
Throws:
StandardException - Standard exception policy.

DynamicCompiledOpenConglomInfo getDynamicCompiledConglomInfo(long conglomId) throws StandardException
The dynamic info is a set of variables to be used in a given ScanController or ConglomerateController. It can only be used in one controller at a time. It is up to the caller to ensure the correct thread access to this info. The type of info in this object is a scratch template for btree traversal, other scratch variables for qualifier evaluation, etc.
Parameters:
conglomId - The identifier of the conglomerate to open.
Throws:
StandardException - Standard exception policy.

void logAndDo(Loggable operation) throws StandardException
This simply passes the operation to the RawStore which logs and does it.
Parameters:
operation - the operation that is to be applied.
Throws:
StandardException - Standard Derby exception policy.
See Also:
Loggable, Transaction.logAndDo(org.apache.derby.iapi.store.raw.Loggable)
long createSort(java.util.Properties implParameters, DataValueDescriptor[] template, ColumnOrdering[] columnOrdering, SortObserver sortObserver, boolean alreadyInOrder, long estimatedRows, int estimatedRowSize) throws StandardException
Sorts also do aggregation. The input (unaggregated) rows have the same format as the aggregated rows, and the aggregate results are part of both rows. The sorter, when it notices that a row is a duplicate of another, calls a user-supplied aggregation method (see interface Aggregator), passing it both rows. One row is known as the 'addend' and the other the 'accumulator'. The aggregation method is assumed to merge the addend into the accumulator. The sort then discards the addend row.
So, for the query:

    select a, sum(b) from t group by a

the input row to the sorter would have one column for a and another column for sum(b). It is up to the caller to get the format of the row correct and to initialize the aggregate values correctly (null for most aggregates, 0 for count).
Nulls are always considered to be ordered in a sort; that is, null compares equal to null, and less than anything else.
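A minimal sketch of creating and draining a sort (illustrative; template and ordering are assumed to be built elsewhere, and SortController.insert()/completedInserts() are assumed as the load/finish calls):

    // Create a sort, feed it rows, then scan the sorted result.
    long sortId = tc.createSort(
        null,      // default, "generally useful" sort
        template,  // prototype row
        ordering,  // ColumnOrdering[] describing the sort columns
        null,      // no sort observer: plain sort
        false,     // input is not already in order
        -1,        // unknown row count
        -1);       // unknown row size

    SortController sorter = tc.openSort(sortId);
    // ... sorter.insert(row) for each input row ...
    sorter.completedInserts();

    ScanController sorted = tc.openSortScan(sortId, false);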
Parameters:
implParameters - Properties which help in choosing implementation-specific sort options. If null, a "generally useful" sort will be used.
template - A row which is prototypical for the sort. All rows inserted into the sort controller must have exactly the same number of columns as the template row. Every column in an inserted row must have the same type as the corresponding column in the template.
columnOrdering - An array which specifies which columns participate in ordering - see interface ColumnOrdering for details. The column referenced in the 0th columnOrdering object is compared first, then the 1st, etc. To sort on a single column, specify an array with a single entry.
sortObserver - An object that is used to observe the sort. It is used to provide a callback into the sorter. If the sortObserver is null, then the sort proceeds as normal. If the sortObserver is non-null, then it is called as rows are loaded into the sorter. It can be used to implement a distinct sort, aggregates, etc.
alreadyInOrder - Indicates that the rows inserted into the sort controller will already be in order. This is used to perform aggregation only.
estimatedRows - The number of rows that the caller estimates will be inserted into the sort. -1 indicates that the caller has no idea. Used by the sort to make good choices about in-memory vs. external sorting, and to size merge runs.
estimatedRowSize - The estimated average row size of the rows being sorted. This is the client portion of the row size; it should not attempt to account for Store's overhead. -1 indicates that the caller has no idea (and the sorter will use 100 bytes in that case). Used by the sort to make good choices about in-memory vs. external sorting, and to size merge runs. The client is not expected to estimate the per-column/per-row overhead of raw store, just to make a guess about the storage associated with each row (i.e. reasonable estimates for some implementations would be 4 for int, 8 for long, 102 for char(100), 202 for varchar(200), a number out of a hat for user types, ...).
Throws:
StandardException - From a lower-level exception.
See Also:
SortObserver, ColumnOrdering, ScanController, SortController
void dropSort(long sortid) throws StandardException
Drop a sort created by a call to createSort() within the current transaction (sorts are automatically "dropped" at the end of a transaction). This call should only be made after all openSortScan()s and openSort()s have been closed.
Parameters:
sortid - The identifier of the sort to drop, as returned from createSort.
Throws:
StandardException - From a lower-level exception.

SortController openSort(long id) throws StandardException
There may (in the future) be multiple sort inserters for a given sort, the idea being that the various threads of a parallel query plan can all insert into the sort. For now, however, only a single sort controller per sort is supported.
Parameters:
id - The identifier of the sort to open, as returned from createSort.
Throws:
StandardException - From a lower-level exception.

SortCostController openSortCostController() throws StandardException
Return an open SortCostController which can be used to ask about the estimated costs of SortController operations.
Throws:
StandardException - Standard exception policy.
See Also:
StoreCostController
RowLocationRetRowSource openSortRowSource(long id) throws StandardException
Parameters:
id - The identifier of the sort to scan, as returned from createSort.
Throws:
StandardException - From a lower-level exception.

ScanController openSortScan(long id, boolean hold) throws StandardException
In the future, multiple sort scans on the same sort will be supported (for parallel execution across a uniqueness sort in which the order of the resulting rows is not important). Currently, only a single sort scan is allowed per sort.
In the future, it will be possible to open a sort scan and start retrieving rows before the last row is inserted. The sort controller would block till rows were available to return. Currently, an attempt to retrieve a row before the sort controller is closed will cause an exception.
Parameters:
id - The identifier of the sort to scan, as returned from createSort.
hold - If true, this scan will be maintained open over commits.
Throws:
StandardException - From a lower-level exception.

boolean anyoneBlocked()
void abort() throws StandardException
Throws:
StandardException - Only exceptions with severities greater than ExceptionSeverity.TRANSACTION_SEVERITY will be thrown.

void commit() throws StandardException
Throws:
StandardException - Only exceptions with severities greater than ExceptionSeverity.TRANSACTION_SEVERITY will be thrown. If an exception is thrown, the transaction will not (necessarily) have been aborted. The standard error handling mechanism is expected to do the appropriate cleanup. In other words, if commit() encounters an error, the exception is propagated up to the standard exception handler, which initiates cleanupOnError() processing, which will eventually abort the transaction.

DatabaseInstant commitNoSync(int commitflag) throws StandardException
Throws:
StandardException - Only exceptions with severities greater than ExceptionSeverity.TRANSACTION_SEVERITY will be thrown. If an exception is thrown, the transaction will not (necessarily) have been aborted. The standard error handling mechanism is expected to do the appropriate cleanup. In other words, if commit() encounters an error, the exception is propagated up to the standard exception handler, which initiates cleanupOnError() processing, which will eventually abort the transaction.
See Also:
commit()
void destroy()
ContextManager getContextManager()
java.lang.String getTransactionIdString()
This transaction "name" will be the same id which is returned in the TransactionInfo information, used by the lock and transaction vti's to identify transactions.
Although implementation specific, the transaction id is usually a number which is bumped every time a commit or abort is issued.
java.lang.String getActiveStateTxIdString()
boolean isIdle()
boolean isGlobal()
See Also:
AccessFactory.startXATransaction(org.apache.derby.iapi.services.context.ContextManager, int, byte[], byte[]), createXATransactionFromLocalTransaction(int, byte[], byte[])
boolean isPristine()
int releaseSavePoint(java.lang.String name, java.lang.Object kindOfSavepoint) throws StandardException
Parameters:
name - The user-provided name of the savepoint, set by the user in the setSavePoint() call.
kindOfSavepoint - A null value means it is an internal savepoint (i.e. not a user-defined savepoint). A non-null value means it is a user-defined savepoint, which can be a SQL savepoint or a JDBC savepoint. A String value for kindOfSavepoint means it is a SQL savepoint; a JDBC Savepoint object value means it is a JDBC savepoint.
Throws:
StandardException - Standard Derby exception policy. A statement level exception is thrown if no savepoint exists with the given name.

int rollbackToSavePoint(java.lang.String name, boolean close_controllers, java.lang.Object kindOfSavepoint) throws StandardException
if "close_controllers" is true then all conglomerates and scans are closed (held or non-held).
If "close_controllers" is false then no cleanup is done by the TransactionController. It is then the responsibility of the caller to close all resources that may have been affected by the statements backed out by the call. This option is meant to be used by the Language implementation of statement level backout, where the system "knows" what could be affected by the scope of the statements executed within the statement.
Parameters:
name - The identifier of the savepoint to roll back to.
close_controllers - boolean indicating whether or not the controller should close open controllers.
kindOfSavepoint - A null value means it is an internal savepoint (i.e. not a user-defined savepoint). A non-null value means it is a user-defined savepoint, which can be a SQL savepoint or a JDBC savepoint. A String value for kindOfSavepoint means it is a SQL savepoint; a JDBC Savepoint object value means it is a JDBC savepoint.
Throws:
StandardException - Standard Derby exception policy. A statement level exception is thrown if no savepoint exists with the given name.

int setSavePoint(java.lang.String name, java.lang.Object kindOfSavepoint) throws StandardException
Parameters:
name - The user-provided name of the savepoint.
kindOfSavepoint - A null value means it is an internal savepoint (i.e. not a user-defined savepoint). A non-null value means it is a user-defined savepoint, which can be a SQL savepoint or a JDBC savepoint. A String value for kindOfSavepoint means it is a SQL savepoint; a JDBC Savepoint object value means it is a JDBC savepoint.
Throws:
StandardException - Standard Derby exception policy. A statement level exception is thrown if no savepoint exists with the given name.
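An illustrative sketch of the three savepoint calls together (the savepoint name and the String kind are arbitrary examples):

    // Set a SQL (String kind) savepoint, roll back to it on error,
    // and release it once it is no longer needed.
    tc.setSavePoint("sp1", "SQL");
    try {
        // ... statements ...
        tc.releaseSavePoint("sp1", "SQL");
    } catch (StandardException se) {
        tc.rollbackToSavePoint("sp1", true /* close controllers */, "SQL");
    }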
java.lang.Object createXATransactionFromLocalTransaction(int format_id, byte[] global_id, byte[] branch_id) throws StandardException

Get a transaction controller with which to manipulate data within the access manager. This controller allows one to manipulate a global XA conforming transaction.
Must only be called after a previous local transaction was created and exists in the context. Can only be called if the current transaction is in the idle state. Upon return from this call the old tc will be unusable, and all references to it should be dropped (it will have been implicitly destroy()'d by this call).
The (format_id, global_id, branch_id) triplet is meant to come exactly from a javax.transaction.xa.Xid. We don't use Xid so that the system can be delivered on a non-1.2 VM system and not require the javax classes in the path.
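An illustrative sketch, assuming xid is a javax.transaction.xa.Xid supplied by the transaction manager:

    // Promote the current (idle) local transaction to an XA transaction.
    Object xaTxn = tc.createXATransactionFromLocalTransaction(
        xid.getFormatId(),
        xid.getGlobalTransactionId(),
        xid.getBranchQualifier());
    // The old tc is unusable from here on; drop all references to it.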
Parameters:
global_id - the global transaction identifier part of the XID, i.e. Xid.getGlobalTransactionId().
branch_id - The branch qualifier of the Xid, i.e. Xid.getBranchQualifier().
Throws:
StandardException - Standard exception policy.
See Also:
TransactionController
Apache Derby V10.13 Internals - Copyright © 2004,2016 The Apache Software Foundation. All Rights Reserved.