otp.ObSnapshotWide

ObSnapshotWide(running=False, bucket_interval=0, bucket_time='end', bucket_units=None, bucket_end_condition=None, end_condition_per_group=False, group_by=None, max_levels=None, max_depth_shares=None, max_depth_for_price=None, book_uncross_method=None, dq_events_that_clear_book=None, book_delimiters=None, max_initialization_days=1, state_key_max_inactivity_sec=None, size_max_fractional_digits=0, db=None, symbol=<class 'onetick.py.utils.types.adaptive'>, tick_type=<class 'onetick.py.utils.types.adaptive'>, start=<class 'onetick.py.utils.types.adaptive'>, end=<class 'onetick.py.utils.types.adaptive'>, date=None, schema_policy=<class 'onetick.py.utils.types.adaptive'>, guess_schema=None, identify_input_ts=False, back_to_first_tick=0, keep_first_tick_timestamp=None, max_back_ticks_to_prepend=1, where_clause_for_back_ticks=None, symbols=None, presort=<class 'onetick.py.utils.types.adaptive'>, batch_size=None, concurrency=<class 'onetick.py.utils.types.default'>, schema=None, **kwargs)
Construct a source providing an order book wide snapshot for a given db.
This is a shortcut for DataSource + ob_snapshot_wide().
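For illustration, a minimal sketch of the equivalent longhand form, assuming DataSource accepts the same db/tick_type/symbols arguments and that ob_snapshot_wide() accepts the same book-related parameters (e.g. max_levels); 'SOME_DB' is the sample database used in the Examples section below:

>>> # shortcut form
>>> data = otp.ObSnapshotWide(db='SOME_DB', tick_type='PRL', symbols='AA', max_levels=1)
>>> # roughly equivalent longhand form (assumed signature of ob_snapshot_wide)
>>> src = otp.DataSource(db='SOME_DB', tick_type='PRL', symbols='AA')
>>> data = src.ob_snapshot_wide(max_levels=1)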
Parameters
running (bool, default=False) –
Aggregation will be calculated as a sliding window.
The running and bucket_interval parameters determine when new buckets are created:

running = True: aggregation is calculated in a sliding window.

  bucket_interval = N (N > 0): the window size is N. An output tick is generated when a tick "enters" the window (arrival event) and when it "exits" the window (exit event).

  bucket_interval = 0: the left boundary of the window is bound to the start time. For each tick, the aggregation is calculated over [start_time; tick_t].

running = False: buckets partition the [query start time, query end time) interval into non-overlapping intervals of size bucket_interval (with the last interval possibly of a smaller size). If bucket_interval is set to 0, a single bucket for the entire interval is created.

Note that in non-running mode OneTick unconditionally divides the whole time interval into the specified number of buckets. This means you will always get this number of ticks in the result, even if there are fewer ticks in the input data.

Default: False
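For example, a hedged sketch (reusing the sample 'SOME_DB' database from the Examples section below) of a 60-second sliding window; output is omitted since it depends on the data:

>>> # running=True turns bucketing into a sliding window of bucket_interval seconds
>>> data = otp.ObSnapshotWide(db='SOME_DB', tick_type='PRL', symbols='AA',
...                           max_levels=1, running=True, bucket_interval=60)
>>> df = otp.run(data)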
bucket_interval (int or Operation or OnetickParameter or symbol parameter or datetime offset object, default=0) –
Determines the length of each bucket (units depend on bucket_units).
If an Operation of bool type is passed, it acts as bucket_end_condition.
Bucket interval can be set via datetime offset objects like otp.Second, otp.Minute, otp.Hour, otp.Day, otp.Month. In this case you can omit setting the bucket_units parameter.
Bucket interval can also be set with an integer OnetickParameter or symbol parameter.

bucket_time (Literal['start', 'end'], default='end') –
Controls the output timestamp.

start: the timestamp assigned to the bucket is the start time of the bucket.

end: the timestamp assigned to the bucket is the end time of the bucket.
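For example, a sketch (same sample database as in the Examples section below) that buckets the book into 5-minute intervals stamped with the bucket start time:

>>> # a datetime offset object makes setting bucket_units unnecessary
>>> data = otp.ObSnapshotWide(db='SOME_DB', tick_type='PRL', symbols='AA',
...                           max_levels=1, bucket_interval=otp.Minute(5), bucket_time='start')
>>> df = otp.run(data)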
bucket_units (Optional[Literal['seconds', 'ticks', 'days', 'months', 'flexible']], default=None) –
Sets the bucket interval units.
By default, if neither bucket_units nor bucket_end_condition is specified, it is set to 'seconds'. If bucket_end_condition is specified, then bucket_units is set to 'flexible'.
If set to 'flexible', then bucket_end_condition must be set.
Note that the 'seconds' bucket unit doesn't take daylight-saving time of the timezone into account, so you may not get expected results when using, for example, 24 * 60 * 60 seconds as the bucket interval. In such cases use the 'days' bucket unit instead. See the example in onetick.py.agg.sum().
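For example, a sketch (same sample database as in the Examples section below) that builds one snapshot bucket per calendar day, the DST-safe alternative to a 24 * 60 * 60 seconds bucket:

>>> data = otp.ObSnapshotWide(db='SOME_DB', tick_type='PRL', symbols='AA',
...                           max_levels=1, bucket_interval=1, bucket_units='days')
>>> df = otp.run(data)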
bucket_end_condition (condition, default=None) –
An expression that is evaluated on every tick. If it evaluates to "True", then a new bucket is created. This parameter is only used if bucket_units is set to 'flexible'.
Can also be set via the bucket_interval parameter by passing an Operation object.

end_condition_per_group (bool, default=False) –
Controls application of bucket_end_condition in groups.

end_condition_per_group = True: bucket_end_condition is applied only to the group defined by group_by.

end_condition_per_group = False: bucket_end_condition is applied across all groups.

This parameter is only used if bucket_units is set to 'flexible'.
When set to True, it applies to all bucketing conditions. Useful, for example, if you need to specify group_by and you want to group items first and create buckets after that.

group_by (list, str or expression, default=None) – When specified, each bucket is broken further into additional sub-buckets based on the specified field values. If an Operation is used, then a GROUP_{i} column is added, where i is the index in the group_by list. For example, if an Operation is the only element in the group_by list, then a GROUP_0 field will be added.

max_levels (int, default=None) – Number of order book levels (between 1 and 100_000) that need to be computed. If empty, all levels will be computed.
max_depth_shares (int, default=None) – The total number of shares (i.e., the combined SIZE across top several levels of the book) that determines the number of order book levels that need to be part of the order book computation. If that number of levels exceeds max_levels, only max_levels levels of the book will be computed. The shares in excess of max_depth_shares, from the last included level, are not taken into account.
max_depth_for_price (float, default=None) – The multiplier whose product with the price at the top level of the book determines the maximum price distance from the top of the book for the levels that are to be included into the book. In other words, only bids at <top_price>*(1-max_depth_for_price) and above and only asks at <top_price>*(1+max_depth_for_price) and below will be returned. If the number of levels to be included into the book according to this criterion exceeds max_levels, only max_levels levels of the book will be returned.
book_uncross_method (Literal['REMOVE_OLDER_CROSSED_LEVELS'], default=None) – When set to “REMOVE_OLDER_CROSSED_LEVELS”, all ask levels that have price lower or equal to the price of a new bid tick get removed from the book, and all bid levels that have price higher or equal to the price of a new ask tick get removed from the book.
dq_events_that_clear_book (List[str], default=None) – A list of names of data quality events arrival of which should clear the order book.
book_delimiters (Literal['D'], default=None) – When set to “D” an extra tick is created after each book. Also, an additional column, called DELIMITER, is added to output ticks. The extra tick has values of all fields set to the defaults (0,NaN,””), except the delimiter field, which is set to “D.” All other ticks have the DELIMITER set to zero (0).
max_initialization_days (int, default=1) – This parameter specifies how many days back book event processors should go in order to find the latest full state of the book. The query will not go back resulting number of days if it finds initial book state earlier. When book event processors are used after VIRTUAL_OB EP, this parameter should be set to 0. When set, this parameter takes precedence over the configuration parameter BOOKS.MAX_INITIALIZATION_DAYS.
state_key_max_inactivity_sec (int, default=None) – If set, specifies in how many seconds after it was added a given state key should be automatically removed from the book.
size_max_fractional_digits (int, default=0) – Specifies maximum number of digits after dot in SIZE, if SIZE can be fractional.
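As an illustration (same assumptions as the Examples section below), limiting the book both by level count and by price distance from the top, with a delimiter tick after each book:

>>> # keep at most 10 levels, drop levels more than 5% away from the top of the book
>>> data = otp.ObSnapshotWide(db='SOME_DB', tick_type='PRL', symbols='AA',
...                           max_levels=10, max_depth_for_price=0.05, book_delimiters='D')
>>> df = otp.run(data)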
db (str, list of str, otp.DB, default=None) – Name(s) of the database or the database object(s).

symbol (str, list of str, Source, query, eval query, default=onetick.py.adaptive) – Symbol(s) from which data should be taken.

tick_type (str, list of str, default=onetick.py.adaptive) – Tick type of the data. If not specified, all ticks from db will be taken. If ticks can't be found or there are many databases specified in db, then the default is "TRD".

start (datetime.datetime, otp.datetime, onetick.py.adaptive, default=onetick.py.adaptive) – Start of the interval from which the data should be taken. Default is onetick.py.adaptive, making the final query deduce the time limits from the rest of the graph.

end (datetime.datetime, otp.datetime, onetick.py.adaptive, default=onetick.py.adaptive) – End of the interval from which the data should be taken. Default is onetick.py.adaptive, making the final query deduce the time limits from the rest of the graph.

date (datetime.datetime, otp.datetime, default=None) – Allows specifying a whole day instead of passing the start and end parameters explicitly. If it is set along with the start and end parameters, the last two are ignored.

schema_policy ('tolerant', 'tolerant_strict', 'fail', 'fail_strict', 'manual', 'manual_strict', default=onetick.py.adaptive) –
Schema deduction policy:

'tolerant' (default): The resulting schema is a combination of schema and the database schema. If the database schema can be deduced, it is checked to be type-compatible with schema, and ValueError is raised if checks fail. Also, with this policy the database is scanned 5 days back to find the schema. It is useful when the database is misconfigured or in case of holidays.

'tolerant_strict': The resulting schema will be schema if it's not empty; otherwise, the database schema is used. If the database schema can be deduced, it is checked whether it lacks fields from schema and whether it is type-compatible with schema, and ValueError is raised if checks fail. Also, with this policy the database is scanned 5 days back to find the schema. It is useful when the database is misconfigured or in case of holidays.

'fail': The same as 'tolerant', but if the database schema can't be deduced, raises an Exception.

'fail_strict': The same as 'tolerant_strict', but if the database schema can't be deduced, raises an Exception.

'manual': The resulting schema is a combination of schema and the database schema. Compatibility with the database schema will not be checked.

'manual_strict': The resulting schema will be exactly schema. Compatibility with the database schema will not be checked. If some fields specified in schema do not exist in the database, their values will be set to the default value for their type (0 for integers, NaN for floats, empty string for strings, epoch for datetimes).

Default value is 'tolerant' (if the deprecated parameter guess_schema is not set). If guess_schema is set to True, then the value is 'fail'; if False, then 'manual'.
The default value can be changed with the otp.config.default_schema_policy configuration parameter.

guess_schema (bool, default=None) –
Deprecated since version 1.3.16.
Use the schema_policy parameter instead. If guess_schema is set to True, then the schema_policy value is 'fail'; if False, then 'manual'.

identify_input_ts (bool, default=False) – If set to False, the fields SYMBOL_NAME and TICK_TYPE are not appended to the output ticks.
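For illustration, a hedged sketch combining the date shortcut with an explicit schema policy; the field names and types are taken from the sample output in the Examples section below and may differ in your database:

>>> data = otp.ObSnapshotWide(db='SOME_DB', tick_type='PRL', symbols='AA', max_levels=1,
...                           date=otp.datetime(2003, 12, 1),
...                           schema_policy='manual',
...                           schema={'BID_PRICE': float, 'BID_SIZE': int,
...                                   'ASK_PRICE': float, 'ASK_SIZE': int})
>>> df = otp.run(data)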
back_to_first_tick (int, offset, otp.expr, Operation, default=0) – Determines how far back to go looking for the latest tick before start time. If one is found, it is inserted into the output time series with the timestamp set to start time. Note: the value is rounded to int, so otp.Millis(999) will be 0 seconds.

keep_first_tick_timestamp (str, default=None) – If set, a new field with this name will be added to the source. This field contains the original timestamp of the tick that was taken from before the start time of the query. For all other ticks, the value in this field equals the value of the Time field. This parameter is ignored if back_to_first_tick is not set.

max_back_ticks_to_prepend (int, default=1) – When the back_to_first_tick interval is specified, this parameter determines the maximum number of the most recent ticks before start_time that will be prepended to the output time series. Their timestamp will be changed to start_time.

where_clause_for_back_ticks (onetick.py.core.column_operations.base.Raw, default=None) – A logical expression that is computed only for the ticks encountered when a query goes back from the start time in search of the ticks to prepend. If it returns false, a tick is ignored.
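For example, a sketch (same assumptions as the Examples section below) that prepends the latest tick from up to one day before the query start and keeps its original timestamp in a new field; the field name ORIG_TIME is chosen here purely for illustration:

>>> data = otp.ObSnapshotWide(db='SOME_DB', tick_type='PRL', symbols='AA', max_levels=1,
...                           back_to_first_tick=otp.Day(1),
...                           keep_first_tick_timestamp='ORIG_TIME')
>>> df = otp.run(data)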
symbols (str, list of str, Source, query, eval query, onetick.query.GraphQuery, default=None) – Symbol(s) from which data should be taken. Alias for the symbol parameter. Takes precedence over it.

presort (bool, default=onetick.py.adaptive) – Add the presort EP in case of bound symbols. Applicable only when symbols is not None. By default, it is set to True if symbols is set and to False otherwise (a multi-symbol sketch is shown in the Examples section below).

batch_size (int, default=None) – Specifies the query batch size for the presort. By default, the value from otp.config.default_batch_size is used.

concurrency (int, default=onetick.py.utils.default) – Specifies the number of CPU cores to utilize for the presort. By default, the value is inherited from the value of the original query specified in the concurrency parameter of the run() method (which by default is set to otp.config.default_concurrency).

schema (Optional[Dict[str, type]], default=None) – Dict of <column name> -> <column type> pairs that the source is expected to have. If the type is irrelevant, provide None as the type in question.

kwargs (type[str]) – Deprecated. Use schema instead. List of <column name> -> <column type> pairs that the source is expected to have. If the type is irrelevant, provide None as the type in question.
Examples
>>> data = otp.ObSnapshotWide(db='SOME_DB', tick_type='PRL', symbols='AA', max_levels=1)
>>> otp.run(data)
        Time  BID_PRICE         BID_UPDATE_TIME  BID_SIZE  ASK_PRICE         ASK_UPDATE_TIME  ASK_SIZE  LEVEL
0 2003-12-03        5.0 2003-12-01 00:00:00.004         7        2.0 2003-12-01 00:00:00.003         6      1
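A further hedged sketch with several bound symbols (the symbol names are illustrative; substitute symbols present in your database). With a non-empty symbols list, presort defaults to True, and identify_input_ts=True adds the SYMBOL_NAME and TICK_TYPE columns so the rows can be told apart; output is omitted since it depends on the data:

>>> data = otp.ObSnapshotWide(db='SOME_DB', tick_type='PRL', symbols=['AA', 'AAA'],
...                           max_levels=1, identify_input_ts=True)
>>> df = otp.run(data)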
See also
OB_SNAPSHOT_WIDE OneTick event processor