otp.ObSnapshot#

ObSnapshot(running=False, bucket_interval=0, bucket_time='end', bucket_units=None, bucket_end_condition=None, end_condition_per_group=False, group_by=None, groups_to_display='all', side=None, max_levels=None, max_depth_shares=None, max_depth_for_price=None, max_spread=None, book_uncross_method=None, dq_events_that_clear_book=None, identify_source=False, show_full_detail=False, show_only_changes=False, book_delimiters=None, max_initialization_days=1, state_key_max_inactivity_sec=None, size_max_fractional_digits=0, include_market_order_ticks=None, db=None, symbol=<class 'onetick.py.utils.types.adaptive'>, tick_type=<class 'onetick.py.utils.types.adaptive'>, start=<class 'onetick.py.utils.types.adaptive'>, end=<class 'onetick.py.utils.types.adaptive'>, date=None, schema_policy=<class 'onetick.py.utils.types.adaptive'>, guess_schema=None, identify_input_ts=False, back_to_first_tick=0, keep_first_tick_timestamp=None, max_back_ticks_to_prepend=1, where_clause_for_back_ticks=None, symbols=None, presort=<class 'onetick.py.utils.types.adaptive'>, batch_size=None, concurrency=<class 'onetick.py.utils.types.default'>, schema=None, symbol_date=None, **kwargs)#

Construct a source providing an order book snapshot for a given db. This is just a shortcut for DataSource + ob_snapshot().

Parameters:
  • running (bool, default=False) –

    See Aggregation buckets guide to see examples of how this parameter works.

    Specifies whether the aggregation is calculated in a sliding window. The running and bucket_interval parameters together determine when new buckets are created.

    • running = True

      aggregation will be calculated in a sliding window.

      • bucket_interval = N (N > 0)

        The window size will be N. An output tick is generated when a tick “enters” the window (arrival event) and when it “exits” the window (exit event).

      • bucket_interval = 0

        The left boundary of the window is fixed at the query start time. For each tick, the aggregation is calculated over the interval [start_time; tick_t], i.e. from the query start time to the tick’s timestamp (inclusive).

    • running = False (default)

      Buckets partition the [query start time, query end time) interval into non-overlapping intervals of size bucket_interval (with the last interval possibly smaller). If bucket_interval is set to 0, a single bucket covering the entire interval is created.

      Note that in non-running mode OneTick unconditionally divides the whole time interval into the specified buckets. This means you always get that number of ticks in the result, even if the input data contains fewer ticks.

    Default: False
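
    The non-running bucketing described above can be illustrated with a plain-Python sketch (this is not onetick.py API, just an illustration of the partitioning semantics):

    ```python
    from datetime import datetime, timedelta

    def bucket_boundaries(start, end, bucket_interval_sec):
        """Sketch of running=False bucketing: partition [start, end) into
        intervals of bucket_interval_sec seconds, the last one possibly
        shorter; an interval of 0 means a single bucket."""
        if bucket_interval_sec == 0:
            return [(start, end)]
        buckets = []
        t = start
        while t < end:
            buckets.append((t, min(t + timedelta(seconds=bucket_interval_sec), end)))
            t += timedelta(seconds=bucket_interval_sec)
        return buckets

    start = datetime(2003, 12, 1, 9, 30)
    end = datetime(2003, 12, 1, 9, 32, 30)
    print(bucket_boundaries(start, end, 60))
    # three buckets: two full minutes plus a final 30-second bucket
    ```

    Note that a bucket is emitted for every interval, matching the behavior above where empty input still produces the full set of buckets.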

  • bucket_interval (int or float or Operation or OnetickParameter or symbol parameter or datetime offset object, default=0) –

    Determines the length of each bucket (units depend on bucket_units).

    If an Operation of bool type is passed, it acts as bucket_end_condition.

    Bucket interval can also be set as a float value if bucket_units is set to seconds. Note that values less than 0.001 (1 millisecond) are not supported.

    Bucket interval can also be set via one of the datetime offset objects: otp.Milli, otp.Second, otp.Minute, otp.Hour, otp.Day, otp.Month. In this case you can omit the bucket_units parameter.

    Bucket interval can also be set with integer OnetickParameter or symbol parameter.

  • bucket_time (Literal['start', 'end'], default=end) –

    Controls the output timestamp.

    • start

      the timestamp assigned to the bucket is the start time of the bucket.

    • end

      the timestamp assigned to the bucket is the end time of the bucket.

  • bucket_units (Optional[Literal['seconds', 'days', 'months', 'flexible']], default=None) –

    Set bucket interval units.

    By default, if neither bucket_units nor bucket_end_condition is specified, it is set to seconds. If bucket_end_condition is specified, then bucket_units is set to flexible.

    If set to flexible then bucket_end_condition must be set.

    Note that the seconds bucket unit doesn’t take into account daylight-saving time of the timezone, so you may not get expected results when using, for example, 24 * 60 * 60 seconds as the bucket interval. In such cases use the days bucket unit instead. See the example in onetick.py.agg.sum().
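
    The DST caveat can be demonstrated with standard-library datetime arithmetic (this is a pure-Python illustration, not onetick.py behavior; it assumes the system has IANA timezone data available):

    ```python
    from datetime import datetime, timedelta, timezone
    from zoneinfo import ZoneInfo

    # US DST starts 2021-03-14 at 02:00 local time, so this calendar day
    # lasts only 23 hours in America/New_York.
    tz = ZoneInfo("America/New_York")
    day_start = datetime(2021, 3, 14, 0, 0, tzinfo=tz)

    # A "seconds" bucket of 24 * 60 * 60 advances by absolute elapsed time:
    absolute = (day_start.astimezone(timezone.utc)
                + timedelta(seconds=24 * 60 * 60)).astimezone(tz)

    # A "days" bucket advances by calendar day (wall-clock arithmetic):
    calendar = day_start + timedelta(days=1)

    print(absolute)  # 2021-03-15 01:00:00-04:00 -- one hour past midnight
    print(calendar)  # 2021-03-15 00:00:00-04:00 -- exactly the next midnight
    ```

    This is why a 24-hour seconds bucket drifts by an hour across a DST transition while a days bucket stays aligned to midnight.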

  • bucket_end_condition (condition, default=None) –

    An expression that is evaluated on every tick. If it evaluates to “True”, then a new bucket is created. This parameter is only used if bucket_units is set to “flexible”.

    Also can be set via bucket_interval parameter by passing Operation object.

  • end_condition_per_group (bool, default=False) –

    Controls application of bucket_end_condition in groups.

    • end_condition_per_group = True

      bucket_end_condition is applied separately within each group defined by group_by

    • end_condition_per_group = False

      bucket_end_condition is applied across all groups

    This parameter is only used if bucket_units is set to “flexible”.

    When set to True, this applies to all bucketing conditions. Useful, for example, when you specify group_by and want to group ticks first and then create buckets within each group.

  • group_by (list, str or expression, default=None) – When specified, each bucket is broken further into additional sub-buckets based on the specified field values. If an Operation is used, a GROUP_{i} column is added, where i is its index in the group_by list. For example, if an Operation is the only element of the group_by list, a GROUP_0 field is added.

  • groups_to_display (Literal['all', 'event_in_last_bucket'], default=all) – Specifies for which sub-buckets (groups) ticks should be shown for each bucket interval. By default, all groups are shown at the end of each bucket interval. If this parameter is set to event_in_last_bucket, only the groups that received at least one tick within a given bucket interval are shown.

  • side (Literal['ASK', 'BID'], default=None) – Specifies whether the function is to be applied to sell orders (ASK), buy orders (BID), or both (empty).

  • max_levels (int, default=None) – Number of order book levels (between 1 and 100_000) that need to be computed. If empty, all levels will be computed.

  • max_depth_shares (int, default=None) – The total number of shares (i.e., the combined SIZE across the top levels of the book) that determines how many order book levels are included in the order book computation. If that number of levels exceeds max_levels, only max_levels levels of the book are computed. Shares in excess of max_depth_shares at the last included level are not taken into account.

  • max_depth_for_price (float, default=None) – A multiplier whose product with the price at the top level of the book determines the maximum price distance from the top of the book for levels included in the book. In other words, only bids at <top_price>*(1-max_depth_for_price) and above, and only asks at <top_price>*(1+max_depth_for_price) and below, are returned. If the number of levels satisfying this criterion exceeds max_levels, only max_levels levels of the book are returned.
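
    The max_depth_for_price cutoff can be sketched in plain Python (a hypothetical helper for illustration only, not part of onetick.py):

    ```python
    def filter_by_depth_for_price(bids, asks, max_depth_for_price):
        """Keep bids no further below the top bid, and asks no further
        above the top ask, than top_price * max_depth_for_price."""
        bid_floor = bids[0] * (1 - max_depth_for_price)
        ask_cap = asks[0] * (1 + max_depth_for_price)
        return ([p for p in bids if p >= bid_floor],
                [p for p in asks if p <= ask_cap])

    bids = [100.0, 99.5, 98.0, 90.0]    # best bid first
    asks = [100.5, 101.0, 103.0, 120.0]  # best ask first
    # with a 2% depth cutoff, the 90.0 bid and the 103.0/120.0 asks drop out
    print(filter_by_depth_for_price(bids, asks, 0.02))
    ```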

  • max_spread (float, default=None) – An absolute value: price levels whose price satisfies abs(<MID price> - <order price>) <= max_spread/2 contribute to the computed book. If max_spread is specified, side must not be specified. An empty book is returned when one side is empty.
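
    Likewise, the max_spread filter can be sketched in plain Python (again a hypothetical helper, not onetick.py API):

    ```python
    def filter_by_max_spread(levels, max_spread):
        """Keep price levels within max_spread/2 of the mid price.
        levels is a list of (price, side) with side 'BID' or 'ASK';
        returns an empty book when one side is empty."""
        bids = [p for p, s in levels if s == 'BID']
        asks = [p for p, s in levels if s == 'ASK']
        if not bids or not asks:
            return []
        mid = (max(bids) + min(asks)) / 2
        return [(p, s) for p, s in levels if abs(mid - p) <= max_spread / 2]

    book = [(100.0, 'BID'), (99.0, 'BID'), (101.0, 'ASK'), (104.0, 'ASK')]
    # mid = (100.0 + 101.0) / 2 = 100.5; max_spread=2.0 keeps levels within 1.0
    print(filter_by_max_spread(book, 2.0))
    ```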

  • book_uncross_method (Literal['REMOVE_OLDER_CROSSED_LEVELS'], default=None) – When set to “REMOVE_OLDER_CROSSED_LEVELS”, all ask levels with price lower than or equal to the price of a new bid tick are removed from the book, and all bid levels with price higher than or equal to the price of a new ask tick are removed from the book.
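
    The uncrossing rule can be illustrated with a small Python sketch (a hypothetical helper for illustration, not the actual EP implementation):

    ```python
    def apply_uncross(bids, asks, new_tick):
        """REMOVE_OLDER_CROSSED_LEVELS sketch: a new bid removes ask
        levels priced at or below it; a new ask removes bid levels
        priced at or above it."""
        price, side = new_tick
        if side == 'BID':
            asks = [a for a in asks if a > price]
            bids = sorted(bids + [price], reverse=True)
        else:
            bids = [b for b in bids if b < price]
            asks = sorted(asks + [price])
        return bids, asks

    bids, asks = [99.0, 98.5], [100.0, 100.5]
    # a new bid at 100.0 crosses the 100.0 ask, which gets removed
    bids, asks = apply_uncross(bids, asks, (100.0, 'BID'))
    print(bids, asks)
    ```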

  • dq_events_that_clear_book (List[str], default=None) – A list of names of data quality events whose arrival should clear the order book.

  • identify_source (bool, default=False) – When this parameter is set to True, the input stream is fed through the VIRTUAL_OB event processor (with the QUOTE_SOURCE_FIELDS parameter specified), and group_by is not set to “SOURCE”, ticks with the same price from different sources are kept as multiple separate ticks. The parameter can also be used when merging ticks from multiple feeds: each feed going into the merge needs an ADD_FIELD EP with a source value set for the VALUE parameter, with a different value for each leg.

  • show_full_detail (bool, default=False) – When set to “true” and if the state key of the input ticks consists of some fields besides PRICE, output ticks will contain all fields from the input ticks for each price level. When set to “false” only PRICE, UPDATE_TIME, SIZE, LEVEL, and BUY_SELL_FLAG fields will be populated. Note: setting this flag to “true” has no effect on a time series that does not have a state key.

  • show_only_changes (bool, default=False) –

    When set to true, the output stream carries only changes to the book. The representation is as follows:
    • Changed and added levels are represented by themselves.

    • Deleted levels are shown with a size and level of zero.

    As with other modes, correct detection of update boundaries may require setting the book_delimiters option.

  • book_delimiters (Literal['D'], default=None) – When set to “D”, an extra tick is created after each book. Also, an additional column, called DELIMITER, is added to output ticks. The extra tick has the values of all fields set to the defaults (0, NaN, “”), except the delimiter field, which is set to “D”. All other ticks have DELIMITER set to zero (0).

  • max_initialization_days (int, default=1) – Specifies how many days back book event processors should go in order to find the latest full state of the book. The query does not go back the full number of days if it finds an initial book state earlier. When book event processors are used after the VIRTUAL_OB EP, this parameter should be set to 0. When set, this parameter takes precedence over the BOOKS.MAX_INITIALIZATION_DAYS configuration parameter.

  • state_key_max_inactivity_sec (int, default=None) – If set, specifies the number of seconds after which a state key, once added, is automatically removed from the book.

  • size_max_fractional_digits (int, default=0) – Specifies the maximum number of digits after the decimal point in SIZE, if SIZE can be fractional.

  • include_market_order_ticks (bool, default=None) –

    If set, market order ticks (which have a NaN price) are included in the order book, at the order book’s top level.

    Default is False.

  • db (str, list of str, otp.DB, default=None) –

    Name(s) of the database or the database object(s).

    When passing a single database, the tick type can be embedded in the name using 'DB_NAME::TICK_TYPE' format (e.g., 'NYSE_TAQ::TRD').

    When passing a list of databases, each entry can include its own tick type (e.g., ['NYSE_TAQ::TRD', 'CME::QTE']). If some entries lack a tick type, the tick_type parameter is used to fill them in.

    When None, the database is expected to come as part of the symbol name (e.g., 'DB::SYMBOL'), and tick_type must be set explicitly.

  • symbol (str, list of str, Source, query, eval query, default= onetick.py.adaptive) – Symbol(s) from which data should be taken.

  • tick_type (str, list of str, default= onetick.py.adaptive) –

    Tick type of the data (e.g., 'TRD' for trades, 'QTE' for quotes).

    When adaptive (default), the tick type is auto-detected from the database. If auto-detection fails or multiple databases are specified, defaults to 'TRD'.

    Can be a list of strings (e.g., ['TRD', 'QTE']) to merge multiple tick types from the same database into a single data flow.

  • start (datetime.datetime, otp.datetime, onetick.py.adaptive, default= onetick.py.adaptive) – Start of the interval from which the data should be taken. Default is onetick.py.adaptive, making the final query deduce the time limits from the rest of the graph.

  • end (datetime.datetime, otp.datetime, onetick.py.adaptive, default= onetick.py.adaptive) – End of the interval from which the data should be taken. Default is onetick.py.adaptive, making the final query deduce the time limits from the rest of the graph.

  • date (datetime.datetime, otp.datetime, default=None) – Allows specifying a whole day instead of passing explicit start and end parameters. If it is set along with the start and end parameters, the latter two are ignored.

  • schema_policy (‘tolerant’, ‘tolerant_strict’, ‘fail’, ‘fail_strict’, ‘manual’, ‘manual_strict’, default= onetick.py.adaptive) –

    Schema deduction policy:

    • ’tolerant’ (default) The resulting schema is a combination of schema and the database schema. If the database schema can be deduced, it is checked for type compatibility with schema, and ValueError is raised if the check fails. Also, with this policy the database is scanned up to 5 days back to find the schema. This is useful when the database is misconfigured or in case of holidays.

    • ’tolerant_strict’ The resulting schema is schema if it’s not empty; otherwise the database schema is used. If the database schema can be deduced, it is checked for fields missing from schema and for type compatibility with schema, and ValueError is raised if the checks fail. Also, with this policy the database is scanned up to 5 days back to find the schema. This is useful when the database is misconfigured or in case of holidays.

    • ’fail’ The same as ‘tolerant’, but if the database schema can’t be deduced, raises an Exception.

    • ’fail_strict’ The same as ‘tolerant_strict’, but if the database schema can’t be deduced, raises an Exception.

    • ’manual’ The resulting schema is a combination of schema and database schema. Compatibility with database schema will not be checked.

    • ’manual_strict’ The resulting schema will be exactly schema. Compatibility with database schema will not be checked. If some fields specified in schema do not exist in the database, their values will be set to some default value for a type (0 for integers, NaNs for floats, empty string for strings, epoch for datetimes).

    Default value is onetick.py.adaptive (if the deprecated guess_schema parameter is not set). If guess_schema is set to True, the value is ‘fail’; if False, ‘manual’. If schema_policy is set to None, the default value is ‘tolerant’.

    Default value can be changed with otp.config.default_schema_policy configuration parameter.

    If you set schema manually while creating a DataSource instance and don’t set schema_policy, it is automatically set to manual.

  • guess_schema (bool, default=None) –

    Deprecated since version 1.3.16.

    Use schema_policy parameter instead.

    If guess_schema is set to True then schema_policy value is ‘fail’, if False then ‘manual’.

  • identify_input_ts (bool, default=False) – If True, adds SYMBOL_NAME and TICK_TYPE fields to every output tick, identifying which symbol and tick type each tick came from. Especially useful when merging multiple symbols to distinguish the source of each tick.

  • back_to_first_tick (int, offset, otp.expr, Operation, default=0) –

    Determines how far back (in seconds) to search for the latest tick before start time. If one is found, it is prepended to the output with its timestamp changed to start time. This is useful for initializing state (e.g., getting the last known price before market open).

    Accepts an integer (seconds), a time offset like otp.Day(1) or otp.Hour(2), or an otp.expr for dynamic values.

    Note: the value is rounded to whole seconds, so otp.Millis(999) becomes 0. Use with keep_first_tick_timestamp to preserve the original tick time, and max_back_ticks_to_prepend to retrieve more than one historical tick.
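
    The prepend semantics of back_to_first_tick and max_back_ticks_to_prepend can be sketched in plain Python (a hypothetical helper for illustration; the real lookup is done server-side by OneTick):

    ```python
    from datetime import datetime, timedelta

    def prepend_back_ticks(ticks, start, lookback_sec, max_back_ticks=1):
        """Find up to max_back_ticks of the most recent ticks in
        [start - lookback, start), re-stamp them to start, and prepend
        them to the in-interval ticks. Assumes ticks are time-sorted."""
        lookback_start = start - timedelta(seconds=lookback_sec)
        before = [t for t in ticks if lookback_start <= t['Time'] < start]
        prepended = [dict(t, Time=start) for t in before[-max_back_ticks:]]
        inside = [t for t in ticks if t['Time'] >= start]
        return prepended + inside

    start = datetime(2022, 3, 1, 9, 30)
    ticks = [
        {'Time': datetime(2022, 3, 1, 9, 29, 50), 'PRICE': 10.0},
        {'Time': datetime(2022, 3, 1, 9, 29, 59), 'PRICE': 10.1},
        {'Time': datetime(2022, 3, 1, 9, 30, 5), 'PRICE': 10.2},
    ]
    result = prepend_back_ticks(ticks, start, lookback_sec=60)
    # only the latest pre-start tick (10.1) is prepended, re-stamped to 09:30:00
    print([(t['Time'], t['PRICE']) for t in result])
    ```

    Passing max_back_ticks=2 here would prepend both pre-start ticks, mirroring max_back_ticks_to_prepend.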

  • keep_first_tick_timestamp (str, default=None) –

    Name for a new nsectime field that stores the original timestamp of prepended ticks. For ticks within the query interval, this field equals the Time field. For ticks prepended by back_to_first_tick, it contains their true historical timestamp (before it was overwritten with start time).

    This parameter is ignored if back_to_first_tick is 0.

  • max_back_ticks_to_prepend (int, default=1) –

    Maximum number of the most recent ticks before start time to prepend to the output. Only used when back_to_first_tick is non-zero. All prepended ticks have their timestamp changed to start time. Must be at least 1.

    For example, to get the last 5 trades before market open, set back_to_first_tick=otp.Day(1) and max_back_ticks_to_prepend=5.

  • where_clause_for_back_ticks (onetick.py.core.column_operations.base.Raw, default=None) –

    A filter expression applied only to ticks found during the backward search (controlled by back_to_first_tick). Ticks where this expression evaluates to False are skipped and not prepended.

    Must be an otp.raw expression with dtype=bool. For example, otp.raw('SIZE>=100', dtype=bool) keeps only ticks with SIZE >= 100.

  • symbols (str, list of str, Source, query, eval query, onetick.query.GraphQuery, default=None) – Symbol(s) from which data should be taken. Alias for the symbol parameter; takes precedence over it.

  • presort (bool, default= onetick.py.adaptive) –

    Controls whether to use a PRESORT Event Processor when querying multiple bound symbols. PRESORT parallelizes data fetching across symbols and merges results in timestamp order, which is generally faster than sequential MERGE for large symbol lists.

    Applicable only when symbols is set. By default, True when symbols is set, False otherwise. Set to False to use sequential MERGE instead.

  • batch_size (int, default=None) –

    Number of symbols to process in each batch during presort execution. Larger batch sizes reduce overhead but use more memory. Only applicable when presort is True.

    By default, the value from otp.config.default_batch_size is used.

  • concurrency (int, default= onetick.py.utils.default) –

    Specifies the number of CPU cores to utilize for the presort. By default, the value is inherited from the value of the query where this PRESORT is used.

    For the main query it may be specified in the concurrency parameter of run() method (which by default is set to otp.config.default_concurrency).

    For auxiliary queries (like first-stage queries) an empty value means OneTick’s default of 1. If otp.config.presort_force_default_concurrency is set, the default concurrency value is set in all PRESORT EPs in all queries.

  • schema (Optional[Dict[str, type]], default=None) –

    Dict of column name to column type pairs that the source is expected to have.

    Supported types: int, float, str, otp.string[N], otp.varstring[N], otp.nsectime, otp.msectime, otp.decimal, bytes.

    If the type of a column is irrelevant, provide None as the type.

    How the schema is used depends on schema_policy. When schema is set and schema_policy is not explicitly provided, schema_policy defaults to 'manual'.

  • symbol_date (otp.datetime or datetime.datetime or int, default=None) –

    Date used for symbol resolution in date-dependent symbologies, where the same symbol identifier can map to different instruments on different dates.

    Accepts otp.datetime, datetime.datetime, or an integer in the YYYYMMDD format (e.g., 20220301).

    Can only be specified when symbols is set. If symbols is a plain list of strings, it is internally converted to a first-stage query with the given symbol_date.

  • kwargs (type[str]) – Deprecated. Use schema instead. <column name> -> <column type> pairs that the source is expected to have. If a type is irrelevant, provide None as the type in question.

Examples

>>> data = otp.ObSnapshot(db='SOME_DB', tick_type='PRL', symbols='AA', max_levels=1) 
>>> otp.run(data) 
        Time  PRICE             UPDATE_TIME  SIZE  LEVEL  BUY_SELL_FLAG
0 2003-12-04    2.0 2003-12-01 00:00:00.003     6      1              1
1 2003-12-04    5.0 2003-12-01 00:00:00.004     7      1              0