All Classes and Interfaces

Class
Description
 
Base implementation of an AggregationBuilder.
Abstract base class for allocating an unassigned shard to a node
Works around ObjectParser not supporting constructor arguments.
An abstract class for representing various types of allocation decisions.
A base class for tasks that need to repeat.
Base implementation that throws an IOException for the DocIdSetIterator APIs.
Bind a value or constant.
A base abstract blob container that adds some method implementations that are often identical across many subclasses.
 
 
 
 
 
 
 
 
Abstract base for scripts to execute to build scripted fields.
A skeleton service for watching and reacting to a single file changing on disk
Base field mapper class for all spatial field types
 
Interface representing parser in geometry indexing pipeline.
Base QueryBuilder that builds a Geometry Query
Local class that encapsulates xcontent-parsed shape parameters
This abstract class holds parameters shared by HighlightBuilder and HighlightBuilder.Field and provides the common setters, equality, hashCode calculation and common serialization
Base class for functionality shared between aggregators for this histogram aggregation.
 
HyperLogLog counter, implemented based on pseudo code from http://static.googleusercontent.com/media/research.google.com/fr//pubs/archive/40671.pdf and its appendix https://docs.google.com/document/d/1gyjfMHy43U9OWBXxfaeG-3MjGzejW1dlpyMwEYAAWEI/view?fullscreen. Trying to understand what this class does without having read the paper is considered adventurous.
Iterator over a HyperLogLog register
Base class for HLL++ algorithms.
AbstractIndexAnalyzerProvider<T extends org.apache.lucene.analysis.Analyzer>
 
 
 
A PerValueEstimator is a sub-class that can be used to estimate the memory overhead for loading the data.
 
 
Base class for terms and multi_terms aggregation that handles common reduce logic
 
 
A component with a Lifecycle which is used to link its start and stop activities to those of the Elasticsearch node which contains it.
Linear counter, implemented based on pseudo code from http://static.googleusercontent.com/media/research.google.com/fr//pubs/archive/40671.pdf and its appendix https://docs.google.com/document/d/1gyjfMHy43U9OWBXxfaeG-3MjGzejW1dlpyMwEYAAWEI/view?fullscreen. Trying to understand what this class does without having read the paper is considered adventurous.
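For orientation, the linear counting estimate described in that paper is small enough to sketch: hash each value into an m-bit bitmap and estimate the cardinality from the fraction of bits still unset. The standalone Java example below is illustrative only; the class name, hash mixing and bitmap size are made up for the demo and it is not the Elasticsearch implementation.

```java
import java.util.BitSet;

/**
 * Illustrative linear-counting sketch (not the Elasticsearch class):
 * hash each value into an m-bit bitmap and estimate the cardinality
 * as -m * ln(V/m), where V is the number of bits still unset.
 */
public class LinearCountingSketch {
    private final BitSet bits;
    private final int m;

    public LinearCountingSketch(int m) {
        this.m = m;
        this.bits = new BitSet(m);
    }

    public void collect(long hash) {
        // Map the hash onto one of the m bits (floorMod keeps the index non-negative).
        bits.set(Math.floorMod(Long.hashCode(hash), m));
    }

    public long cardinality() {
        int unset = m - bits.cardinality();
        if (unset == 0) {
            return m; // bitmap saturated; a real implementation would switch algorithms here
        }
        return Math.round(-m * Math.log((double) unset / m));
    }

    public static void main(String[] args) {
        LinearCountingSketch sketch = new LinearCountingSketch(1 << 16);
        for (int i = 0; i < 10_000; i++) {
            sketch.collect(i * 0x9E3779B97F4A7C15L); // cheap hash mixing, just for the demo
        }
        System.out.println("estimated distinct values: " + sketch.cardinality());
    }
}
```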
Iterator over the hash values
 
Common base class for script field scripts that return long values.
Implements and() and or().
A support class for Modules which reduces repetition and results in a more readable configuration.
Abstract diffable object with simple diffs implementation that sends the entire object if the object has changed, or nothing if the object remained the same.
Base implementation that throws an IOException for the DocIdSetIterator APIs.
This provides a base class for aggregations that are building percentiles or percentiles-like functionality (e.g.
Base implementation of a PipelineAggregationBuilder.
Base class for spatial fields that only support indexing points
 
A base parser implementation for point formats
 
An Abstract Processor that holds tag and description information about the processor.
A record of timings for the various operations that may happen during query execution.
 
Base class for all classes producing lucene queries.
 
 
 
 
Abstract resource watcher framework, which handles adding and removing listeners and calling the resource observer.
 
An extension to runnable.
A basic setting service that can be used for per-index and per-cluster settings.
Transactional interface to update settings.
This class is acting as a placeholder where a Field is its own ScriptFieldFactory as we continue to refactor each Field for source fallback.
Abstract base class for building queries based on script fields.
Abstract base MappedFieldType for runtime fields based on a script.
 
Base class for GeoShapeFieldMapper
 
Base implementation that throws an IOException for the DocIdSetIterator APIs.
Base implementation that throws an IOException for the DocIdSetIterator APIs.
Base implementation that throws an IOException for the DocIdSetIterator APIs.
Base implementation that throws an IOException for the DocIdSetIterator APIs.
 
Base class for synonyms retrieval actions, including GetSynonymsAction and GetSynonymsSetsAction.
Response class that (de)serializes a PagedResult.
Base request class that includes support for pagination parameters
Base class for action listeners that wrap another action listener and dispatch its completion to an executor.
AbstractThrottledTaskRunner runs the enqueued tasks using the given executor, limiting the number of tasks that are submitted to the executor at once.
 
 
This class models a cluster state update task that notifies an AcknowledgedResponse listener when all the nodes have acknowledged the cluster state update request.
An extension interface to ClusterStateUpdateTask that allows the caller to be notified after the master has computed, published, accepted, committed, and applied the cluster state update AND only after the rest of the nodes (or a specified subset) have also accepted and applied the cluster state update.
Identifies a cluster state update request with acknowledgement support
Abstract class used to mark action requests that support acknowledgements.
AcknowledgedRequest that does not have any additional fields.
Base request builder for master node operations that support acknowledgements
A response to an action which updated the cluster state, but needs to report whether any relevant nodes failed to apply the update.
Base class for the common case of a TransportMasterNodeAction that responds with an AcknowledgedResponse.
A filter that allows filtering of transport actions
A simple base class for injectable action filters that spares the implementation from handling the filter chain.
A filter chain that allows continuing and processing the transport action request
Holds the action filters injected through plugins, properly sorted by ActionFilter.order()
An extension to Future allowing for simplified "get" operations.
A listener for action responses or failures.
An adapter for handling transport responses using an ActionListener.
Builds and binds the generic action map, all TransportActions, and ActionFilters.
An exception indicating that a transport action was not found.
An additional extension point for Plugins that extends Elasticsearch's scripting functionality.
 
 
 
This class is similar to ActionRequestBuilder, except that it does not build the request until the request() method is called.
 
Base class for responses to action requests.
 
Base class for Runnables that need to call ActionListener.onFailure(Exception) in case an uncaught exception or error is thrown while the actual action is run.
An action invocation failure.
 
A class whose instances represent a value for counting the number of active shard copies for a given shard in an index.
This utility class provides a primitive for waiting for a configured number of shards to become active before sending a response on an ActionListener.
An Aggregator that delegates collection to another Aggregator and then translates its results into the results you'd expect from another aggregation.
Class representing statistics about adaptive replica selection.
Cluster state update request that allows adding a block to one or more indices
A request to add a block to an index.
Builder for add index block request
 
 
 
 
A request to add voting config exclusions for certain master-eligible nodes, and wait for these nodes to be removed from the voting configuration.
 
Administrative actions/operations against the cluster or the indices.
 
An aggregation.
Common xcontent fields that are shared among addAggregation
A factory that knows how to create an Aggregator of a specific type.
A rough count of the number of buckets that Aggregators built by this builder will contain per parent bucket, used to validate sorts and pipeline aggregations.
Common xcontent fields shared among aggregator builders
Utility class to create aggregations.
Everything used to build and execute aggregations and the data sources that power them.
Implementation of AggregationContext for production usage that wraps our ubiquitous SearchExecutionContext and anything else specific to aggregations.
Collection of helper methods for what to throw in common aggregation error scenarios.
Used to preserve contextual information during aggregation execution.
Thrown when failing to execute an aggregation
 
 
Thrown when failing to execute an aggregation
Provides a set of static helpers to determine if a particular type of InternalAggregation "has a value" or not.
A path that can be used to sort/order buckets (in some multi-bucket aggregations, e.g.
 
Aggregation phase of a search request, used to collect aggregations
AbstractProfileBreakdown customized to work with aggregations.
 
A container class to hold the profile results for a single shard in the request.
Dependencies used to reduce aggs.
A AggregationReduceContext to perform the final reduction.
A AggregationReduceContext to perform a partial reduction.
 
A factory to construct stateful AggregationScript factories for a specific index.
A factory to construct AggregationScript instances.
 
 
 
An Aggregator.
Compare two buckets by their ordinal.
Parses the aggregation request and creates the appropriate aggregator factory for it.
Aggregation mode for sub aggregations.
Base implementation for concrete aggregators.
Collector that controls the life cycle of an aggregation document collection.
Collector manager that produces AggregatorCollector and merges them during the reduce phase.
An immutable collection of AggregatorFactories.
A mutable collection of AggregationBuilders and PipelineAggregationBuilders.
 
Interface for reducing aggregations to a single one.
Interface for reducing InternalAggregations to a single one in a streaming fashion.
Represents an alias, to be associated with an index
Individual operation to perform on the cluster state as part of an IndicesAliasesRequest.
Operation to add an alias to an index.
 
Validate a new alias.
Operation to remove an alias from an index.
 
Operation to remove an index.
 
Needs to be implemented by all ActionRequest subclasses that relate to one or more indices and one or more aliases.
Represents a QueryBuilder and a list of alias names that filters the builder is composed of.
 
 
 
Validator for an alias, to be used before adding an alias to the index metadata to make sure the alias is valid
Stats class encapsulating all of the different circuit breaker stats
Represents an executor node operation that corresponds to a persistent task
 
Allocates an unassigned empty primary shard to a specific node.
 
Allocates an unassigned replica shard to a specific node.
 
Allocates an unassigned stale primary shard to a specific node.
 
Represents the allocation decision by an allocator for an unassigned shard.
 
This event listener might be needed to delay execution of multiple distinct tasks until followup reroute is complete.
A command to move shards in some way.
A simple AllocationCommand composite managing several AllocationCommand implementations
AllocationDecider is an abstract base class that allows making dynamic cluster- or index-wide shard allocation decisions on a per-node basis.
Combines the decision of multiple AllocationDecider implementations into a single allocation decision.
An enum which represents the various decision types that can be taken by the allocators and deciders for allocating a shard to a node.
Uniquely identifies an allocation.
This service manages the node allocation of a cluster.
This class is used to describe the results of applying a set of AllocationCommand
 
 
 
Enum representing the mode in which token filters and analyzers are allowed to operate.
The basic factory interface for analysis components.
An additional extension point for Plugins that extends Elasticsearch's analysis functionality.
An internal registry for tokenizer, token filter, char filter and analyzer.
Statistics about analysis usage.
 
 
 
 
 
 
A request to analyze a text associated with a specific index.
 
 
 
A class that groups analysis components necessary to produce a custom analyzer.
Analyzers that provide access to their token filters should implement this
 
AnalyzerProvider<T extends org.apache.lucene.analysis.Analyzer>
 
 
See the EDSL examples at Binder.
Annotation utilities.
Thrown when an API is not available in the current environment.
A master node sends this request to its peers to inform them that it could commit the cluster state with the given term and version.
An implementation of ValueFetcher that knows how to extract values from the document source.
 
AssignmentDecision represents the decision made during the process of assigning a persistent task to a node of the cluster.
 
Describes an index that is part of the state of a SystemIndices.Feature, but is not protected or managed by the system.
A BiFunction-like interface designed to be used with asynchronous executions.
This async IO processor allows batching IO operations and having a single writer process the write operations.
Allows asynchronously fetching shard-related data from other nodes for allocation, without blocking the cluster update thread.
The result of a fetch operation.
A list backed by an AtomicReferenceArray with potential null values, easily allowing the concrete values to be retrieved as a list using AtomicArray.asList().
 
API that auto-creates an index or data stream originating from requests that write into an index that doesn't yet exist.
 
Encapsulates the logic of whether a new index should be automatically created when a write operation is about to happen in a non-existing index.
This class acts as a functional wrapper around the index.auto_expand_replicas setting.
Helper functions for creating various forms of AutomatonQuery
 
Represents an auto sharding recommendation.
Represents the type of recommendation the auto sharding service provided.
An aggregation that computes the average of the values in the current bucket.
 
 
 
This AllocationDecider controls shard allocation based on awareness key-value pairs defined in the node configuration.
Provides a backoff policy for bulk requests.
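A backoff policy of this kind is essentially an iterable of wait times that the caller walks through between retries. The sketch below is a generic illustration, not the Elasticsearch BackoffPolicy API; the class and method names are invented for the example.

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

/** Illustrative exponential backoff: 50ms, 100ms, 200ms, ... for up to maxRetries delays. */
public final class ExponentialBackoff implements Iterable<Long> {
    private final long initialDelayMillis;
    private final int maxRetries;

    public ExponentialBackoff(long initialDelayMillis, int maxRetries) {
        this.initialDelayMillis = initialDelayMillis;
        this.maxRetries = maxRetries;
    }

    @Override
    public Iterator<Long> iterator() {
        return new Iterator<Long>() {
            private int attempt = 0;

            @Override
            public boolean hasNext() {
                return attempt < maxRetries;
            }

            @Override
            public Long next() {
                if (!hasNext()) {
                    throw new NoSuchElementException();
                }
                return initialDelayMillis << attempt++; // double the delay on each retry
            }
        };
    }

    public static void main(String[] args) {
        for (long delay : new ExponentialBackoff(50, 5)) {
            System.out.println("wait " + delay + "ms before retrying");
        }
    }
}
```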
The BalancedShardsAllocator re-balances the nodes' allocations within a cluster based on a BalancedShardsAllocator.WeightFunction.
Interface shared by AggregationBuilder and PipelineAggregationBuilder so they can conveniently share the same namespace for XContentParser.namedObject(Class, String, Object).
 
Base class for all broadcast operation based responses.
An abstract class that implements basic functionality for allocating shards to nodes based on shard copies that already exist in the cluster.
 
A base class for node level operations.
 
 
 
Abstract base class for allocating an unassigned primary shard to a node
 
Base handler for REST requests.
REST requests are handled by preparing a channel consumer that represents the execution of the request against a channel.
A base class for task requests
Base class for responses of task-related operations
 
A base class for all classes that allows reading ops from translog files
A replication request that has no more information than ReplicationRequest.
A BatchedRerouteService is a RerouteService that batches together reroute requests to avoid unnecessary extra reroutes.
 
 
A specialization of DeferringBucketCollector that collects all matches and then is able to replay a given subset of buckets which represent the survivors from a pruning process performed by the aggregator that owns this collector.
A specialization of DeferringBucketCollector that collects all matches and then replays only the top scoring documents to child aggregations.
Base abstraction of an array.
Utility class to work with arrays.
 
 
 
 
 
 
LeafFieldData impl on top of Lucene's binary doc values.
 
 
 
 
 
 
A range aggregator for values that are stored in SORTED_SET doc values.
 
 
 
 
Performs binary search on an arbitrary data structure.
 
Collects configuration information (primarily bindings) which will be used to create an Injector.
 
A mapping from a key (type and optional annotation) to the strategy for getting instances of the type.
Annotates annotations which are used for binding.
Bind a non-constant key.
 
Visits each of the strategies used to find an instance to satisfy an injection.
 
A bit array that is implemented using a growing LongArray created from BigArrays.
This is a cache for BitDocIdSet based filters and is unbounded by size or time.
A listener interface that is executed for each onCache / onRemoval event
 
 
BlendedTermQuery can be used to unify term statistics across one or more fields in the index.
An interface for managing a repository of blob entries, where each blob entry is just a named group of bytes.
 
Blob name and size.
The list of paths where a blob can reside.
An interface for storing blobs.
 
Shard snapshot metadata
Information about snapshotted file
Contains information about all snapshots for the given shard in repository
BlobStore-based implementation of a Snapshot Repository
A reader that supports reading doc-values from a Lucene segment in Block fashion.
 
 
 
 
 
 
 
Convert from the stored long into the double to load.
Interface for loading data in a block shape.
 
Marker interface for block results.
Builds block "builders" for loading data into blocks for the compute engine.
 
A builder for typed values.
 
 
Implementation of BlockLoader.ColumnAtATimeReader and BlockLoader.RowStrideReader that always loads null.
 
A list of documents to load.
 
 
 
 
 
 
 
 
Loads values from _source.
Load booleans from _source.
Load BytesRefs from _source.
Load doubles from _source.
 
Load ints from _source.
 
Load longs from _source.
Loads values from IndexReader.storedFields().
Load BytesRef blocks from stored BytesRefs.
Load BytesRef blocks from stored Strings.
Load BytesRef blocks from stored Strings.
 
A field mapper for boolean fields.
 
 
 
 
 
 
 
BlockDocValuesReader implementation for boolean scripts.
 
 
 
 
 
 
 
A Query that matches documents matching boolean combinations of other queries.
The BoostingQuery class can be used to effectively demote results that match a given query.
Encapsulates a bootstrap check.
Encapsulate the result of a bootstrap check.
Context that is passed to every bootstrap check to make decisions on.
Exposes system startup information
 
Utilities for use during bootstrap.
A custom break iterator that is used to find break-delimited passages bounded by a provided maximum length in the UnifiedHighlighter context.
A class representing a Bounding-Box for use by Geo and Cartesian queries and aggregations that deal with extents/rectangles representing rectangular areas of interest.
 
A bounded transport address is a tuple of TransportAddresses: an array of addresses the transport is bound to, and the published address that clients should communicate on.
Settings for a CircuitBreaker
 
 
 
An exception indicating that a failure occurred performing an operation on the shard.
 
 
A request that is broadcast to the unpromotable assigned replicas of a primary.
A script used in bucket aggregations that returns a double value.
 
A script used in bucket aggregations that returns a boolean value.
 
A Collector that can collect data in separate buckets.
 
Type specialized sort implementations designed for use in aggregations.
Callbacks for storing extra data along with competitive sorts.
 
Superclass for implementations of BucketedSort for double keys.
Superclass for implementations of BucketedSort for float keys.
Superclass for implementations of BucketedSort for long keys.
Used with BucketedSort.getValues(long, ResultBuilder) to build results from the sorting operation.
A set of static helpers to simplify working with aggregation buckets, in particular providing utilities that help pipeline aggregations.
A gap policy determines how "holes" in a set of buckets should be handled.
A parser for parsing requests for a BucketMetricsPipelineAggregator
 
A class of sibling pipeline aggregations which calculate metrics across the buckets of a sibling aggregation
 
 
Class for reducing a list of BucketReducer to a single InternalAggregations and the number of documents.
 
 
 
 
 
 
 
 
Helper functions for common Bucketing functions
Similar to Lucene's BufferedChecksumIndexInput, however this wraps a StreamInput so anything read will update the checksum
Similar to Lucene's BufferedChecksumIndexOutput, however this wraps a StreamOutput so anything written will update the checksum
Information about a build of Elasticsearch.
 
Allows plugging in current build info.
A version representing the code of Elasticsearch
Response used for actions that index many documents using a scroll request.
Task storing information about a currently running BulkByScroll request.
Status of the reindex, update by query, or delete by query.
This class acts as a builder for BulkByScrollTask.Status.
The status of a slice of the request.
 
Represents a single item response for an action executed as part of the bulk API.
Represents a failure.
A bulk operation listener for bulk events.
A bulk processor is a thread-safe bulk processing class, allowing one to easily set when to "flush" a new bulk request (based on the number of actions, the size, or time) and to easily control the number of concurrent bulk requests allowed to be executed in parallel.
A builder used to create an instance of a bulk processor.
A listener for the execution.
A bulk processor is a thread-safe bulk processing class, allowing one to easily set when to "flush" a new bulk request (based on the number of actions, the size, or time) and to easily control the number of concurrent bulk requests allowed to be executed in parallel.
A builder used to create an instance of a bulk processor.
A listener for the execution.
A bulk request holds an ordered list of IndexRequests, DeleteRequests and UpdateRequests and allows executing them in a single batch.
A bulk request holds an ordered list of IndexRequests and DeleteRequests and allows executing them in a single batch.
Implements the low-level details of bulk request handling
Helper to parse bulk requests.
A response of a bulk execution.
 
 
Bulk-related statistics, including the time and size of shard bulk requests, starting at the shard level and allowing aggregation to the index and node level
Abstraction of an array of byte values.
Wraps an array of bytes into an IndexInput
Resettable StreamInput that wraps a byte array.
 
 
 
 
 
 
 
 
 
 
A SizeUnit represents size at a given unit of granularity and provides utility methods to convert across units.
 
Maps BytesRef bucket keys to bucket ordinals.
An iterator for buckets inside a particular owningBucketOrd.
Compact serializable container for BytesRefs
A reference to bytes.
 
Comparator source for string/binary values.
Specialized hash table implementation similar to Lucene's BytesRefHash that maps BytesRef values to ids.
Used by ScriptSortBuilder to refer to classes in x-pack (eg.
 
 
 
A factory to construct stateful BytesRefSortScript factories for a specific index.
A factory to construct BytesRefSortScript instances.
A StreamOutput that is backed by a BytesRef.
 
A StreamOutput that uses BigArrays to acquire pages of bytes, which avoids frequent reallocation & copying of the internal data.
A specialized, bytes only request, that can potentially be optimized on the network layer, specifically for the same large buffer send to several nodes.
Utility methods to do byte-level encoding.
A simple concurrent cache.
 
 
A Supplier that caches its return value.
 
A command that cancels relocation or recovery of a given shard on a node.
Allows an action to fan-out to several sub-actions and accumulate their results, but which reacts to a cancellation by releasing all references to itself, and hence the partially-accumulated results, allowing them to be garbage-collected.
A cache of a single object whose refresh process can be cancelled.
A task that can be cancelled
This interface is implemented by any class that needs to react to the cancellation of this task.
Tracks items that are associated with cancellable tasks, supporting efficient lookup by task ID and by parent task ID
A utility class for multi-threaded operations that need to be cancellable via interrupts.
 
 
 
A request to cancel tasks
Builder for the request to cancel tasks running on the specified nodes
 
Node-level request used during can-match phase
 
 
 
Shard-level response for can-match requests
An aggregation that computes approximate numbers of unique terms.
 
An aggregator that computes approximate counts of unique values.
 
 
 
Upper bound of how many owningBucketOrds that an Aggregator will have to collect into.
Lucene geometry query for BinaryShapeDocValuesField.
Utility class that converts geometries into Lucene-compatible form for indexing in a shape field.
 
A case insensitive term query.
A case insensitive wildcard query.
A ContextMapping that uses a simple string as a criterion. The suggestions are boosted and/or filtered by their associated category (string) value.
Defines the query context for CategoryContextMapping
 
Use this progress listener for cross-cluster searches where a single coordinator is used for all clusters (minimize_roundtrips=false).
Class representing the long-encoded grid-cells belonging to the multi-value geo-doc-values.
Class representing the long-encoded grid-cells belonging to the singleton geo-doc-values.
Generic interface for both geographic and cartesian centroid aggregations.
This class keeps a running Kahan-sum of coordinates that are to be averaged in TriangleTreeWriter for use as the centroid of a shape.
 
Only for testing until we have a disk-full FileSystem
 
 
A BiConsumer-like interface which allows throwing checked exceptions.
A BiFunction-like interface which allows throwing checked exceptions.
 
A Supplier-like interface which allows throwing checked exceptions.
Snapshot metadata file format used in v2.0 and above
Breaker that will check a parent breaker's limit when incrementing
 
 
Base class for doing chunked writes to a blob store.
 
 
An OutputStream which Gzip-compresses the written data, Base64-encodes it, and writes it in fixed-size chunks to a logger.
The body of a rest response that uses chunked HTTP encoding.
An alternative to ToXContent allowing for progressive serialization by creating an Iterator of ToXContent chunks.
 
Chunked equivalent of ToXContentObject that serializes as a full object.
 
 
Interface for an object that can be incremented, breaking after some configured limit has been reached.
 
 
A class collecting trip count metrics for circuit breakers (parent, field data, request, in flight requests and custom child circuit breakers).
An extension point for Plugin implementations to add custom circuit breakers
Interface for Circuit Breaker services, which provide breakers to classes that load field data.
Class encapsulating stats about the circuit breaker
Exception thrown when the circuit breaker trips
 
Checked by scripting engines to allow loading a Java class.
Combines an ActionListenerResponseHandler with an ActionListener.runAfter action, but with an explicit type so that tests that simulate reboots can release resources without invoking the listener.
 
 
 
 
 
 
 
 
 
A request to clear the voting config exclusions from the cluster state, optionally waiting for these nodes to be removed from the cluster first.
A client provides a one stop interface for performing actions/operations against the cluster.
A scrollable source of hits from a Client instance.
 
 
 
Abstract Transport.Connection that provides common close logic.
Cluster state update request that allows closing one or more indices
A request to close an index.
Builder for close index request
 
 
 
 
 
 
Administrative actions/operations against indices.
A request to explain the allocation of a shard in the cluster
Builder for requests to explain the allocation of a shard in the cluster
Explanation response for a shard in the cluster
A ClusterAllocationExplanation is an explanation of why a shard is unassigned, or if it is not unassigned, then which nodes it could possibly be relocated to.
 
 
 
 
 
 
 
 
 
 
 
 
Represents current cluster level blocks to block dirty operations done against the cluster.
 
 
An event received by the local node, signaling that the cluster state has changed.
This class manages node connections within a cluster.
 
 
 
 
Stores information on what features are present throughout the cluster
 
If this node believes that cluster formation has failed, this record provides information that can be used to determine why that is.
This class is used to fetch the ClusterFormationState from another node.
 
 
This transport action fetches the ClusterFormationState from a remote node.
 
Request to retrieve the cluster settings
Response for cluster settings
 
 
 
 
Pattern converter to format the cluster_id variable into JSON fields cluster.id.
 
ClusterInfo is an object representing a map of nodes to DiskUsage and a map of shard ids to shard sizes; see InternalClusterInfoService.shardIdentifierFromRouting(String) for the key used in the shardSizes map
Represents a data path on a node
 
Represents the total amount of "reserved" space on a particular data path, together with the set of shards considered.
 
 
 
Interface for a class used to gather information about a cluster periodically.
 
Configures classes and services that affect the entire cluster.
 
Resolves cluster names from an expression.
An extension point for Plugin implementations to customize the behavior of cluster management.
This AllocationDecider controls re-balancing operations based on the cluster wide active shard state.
An enum representation for the configured re-balance type.
 
Request to submit cluster reroute allocation commands
Builder for a cluster reroute request
Response returned after a cluster reroute request
 
 
 
 
 
 
Encapsulates all valid cluster level settings.
 
 
Represents the state of the cluster, held in memory on all nodes in the cluster with updates coordinated by the elected master.
 
 
 
Interface that a cluster state update task can implement to indicate that it wishes to be notified when the update has been acked by (some subset of) the nodes in the cluster.
 
A component that is in charge of applying an incoming cluster state to the node internal data structures.
 
A listener to be notified when a cluster state changes.
A utility class which simplifies interacting with the cluster state in cases where one tries to take action based on the current state but may want to wait for a new state and retry upon failure.
 
Represents a cluster state update computed by the MasterService for publication to the cluster.
 
 
 
 
The response for getting the cluster state.
 
An executor for batches of cluster state update tasks.
Encapsulates the context in which a batch of tasks executes.
A task to be executed, along with callbacks for the executor to record the outcome of this task's execution.
 
Base class to be used when needing to update the cluster state. Contains the basic fields that are always needed
 
Various statistics (timing information etc) about cluster state updates coordinated by this node.
A task that can update the cluster state.
 
 
 
 
 
 
 
 
 
 
 
A request to get cluster level stats.
 
 
 
Request for an update cluster settings action
Builder for a cluster update settings request
A response for a cluster update settings action.
Since Lucene 4.0, low-level index segments are read and written through a codec layer that allows the use of use-case-specific file formats & data structures per field.
A builder that enables field collapsing on search request.
Context used for field collapsing
Collections-related utility methods.
Public interface and serialization container for profiled timings of the Collectors used in the search.
A BitSet implementation that combines two instances of BitSet and Bits to provide a single merged view.
An IndexDeletionPolicy that coordinates between Lucene's commits and the retention of translog generation files, making sure that all translog files that are needed to recover from the Lucene commit are not deleted.
A query that matches on multiple text fields, as if the field contents had been indexed into a single combined field.
A rate limiter designed for multiple concurrent users.
 
A class that returns dynamic information with respect to the last commit point of this shard
 
Contains flags that can be used to regulate the presence and calculation of different stat fields in CommonStats.
 
 
Comparator-related utility methods.
Wraps component version numbers for cluster state
Used to calculate sums using the Kahan summation algorithm.
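Kahan (compensated) summation carries a small correction term that recaptures the low-order bits lost when adding a small value to a large running sum. The following self-contained Java sketch is illustrative only; it does not reflect the Elasticsearch class's API.

```java
/** Illustrative Kahan (compensated) summation; reduces floating-point error versus a naive sum. */
public class KahanSumExample {
    private double sum = 0.0;
    private double compensation = 0.0; // running compensation for lost low-order bits

    public void add(double value) {
        double corrected = value - compensation;
        double newSum = sum + corrected;
        compensation = (newSum - sum) - corrected; // algebraically zero, but captures the rounding error
        sum = newSum;
    }

    public double value() {
        return sum;
    }

    public static void main(String[] args) {
        KahanSumExample kahan = new KahanSumExample();
        double naive = 0.0;
        for (int i = 0; i < 10_000_000; i++) {
            kahan.add(0.1);
            naive += 0.1;
        }
        // The compensated sum stays much closer to the exact value of 1.0e6.
        System.out.println("naive: " + naive + ", kahan: " + kahan.value());
    }
}
```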
Mapper for completion field.
 
 
 
ActionType that is used by executor node to indicate that the persistent action finished or failed on the node and needs to be removed from the cluster state in case of successful completion or restarted on some other node in case of failure.
 
 
 
 
 
Suggestion response for CompletionSuggester results. Response format for each entry: { "text" : STRING, "score" : FLOAT, "contexts" : CONTEXTS }, where CONTEXTS : { "CONTEXT_NAME" : ARRAY, ..
 
 
Defines a suggest command based on a prefix, typically to provide "auto-complete" functionality for users as they type search terms.
 
A TriangleTreeVisitor.TriangleTreeDecodedVisitor implementation for Component2D geometries.
A component template is a re-usable Template as well as metadata about the template.
ComponentTemplateMetadata is a custom Metadata implementation for storing a map of component templates and their names.
Represents a version number of a subsidiary component to be reported in node info
An index template consists of a set of index patterns, an optional template, and a list of ids corresponding to component templates that should be composed in order when creating a new index.
 
 
The ComposableIndexTemplateMetadata class is a custom Metadata.Custom implementation that stores a map of ids to ComposableIndexTemplate templates.
 
 
 
 
A composite BytesReference that allows joining multiple bytes references into one without copying.
A script that emits a map of multiple values, that can then be accessed by child runtime fields.
 
 
 
 
 
Marker interface that needs to be implemented by all ActionRequest subclasses that are composed of multiple sub-requests which relate to one or more indices.
A runtime field of type object.
 
 
 
A Processor that executes a list of other "processors".
Similar class to the String class except that it internally stores data using a compressed representation in order to require less permanent memory.
 
 
 
 
 
 
A Recycler implementation based on a concurrent Deque.
Similar to the ClusterRebalanceAllocationDecider, this AllocationDecider controls the number of currently in-progress re-balance (relocation) operations and restricts node allocations if the configured threshold is reached.
Thrown when a user tries to start multiple conflicting snapshot/restore operations at the same time.
Base class for rollover request conditions
Holder for evaluated condition result
Holder for index stats used to evaluate conditions
 
A wrapping processor that adds 'if' logic around the wrapped processor.
Thrown when a programming error such as a misplaced annotation, illegal binding, or unsupported scope is found.
 
 
 
 
 
A connection profile describes how many connections are established to a specific node for each of the available request types.
A builder to build a new ConnectionProfile
 
Used to publish secure setting hashes in the cluster state and to validate those hashes against the local values of those same settings.
 
Dynamically loads an "AnsiPrintStream" from the jANSI library on a separate class loader (so that the server classpath does not need to include jansi.jar)
 
Outputs a very short version of exceptions for an interactive console, pointing to full log for details.
A MappedFieldType that has the same value for all documents.
 
 
A query that wraps a filter and simply returns a constant score equal to the query boost for every document in the filter.
Context of a dependency construction.
A binding to the constructor of a concrete class.
 
Builder for ContextMapping
Context-aware extension of IndexSearcher.
A ContextMapping defines criteria that can be used to filter and/or boost suggestions at query time for CompletionFieldMapper.
 
 
ContextMappings indexes context-enabled suggestion fields and creates context queries for defined ContextMappings for a CompletionFieldMapper
Restores the given ThreadContext.StoredContext once the listener is invoked
Asynchronously runs some computation using at most one thread but expects the input value changes over time as it's running.
A binding created from converting a bound instance to a new type.
Abstract API for classes that help encode double-valued spatial coordinates x/y to their integer-encoded serialized form and decode them back
This action exposes CoordinationDiagnosticsService#diagnoseMasterStability so that a node can get a remote node's view of coordination diagnostics (including master stability).
 
 
This transport action calls CoordinationDiagnosticsService#diagnoseMasterStability
This service reports the health of master stability.
 
 
 
 
 
 
A collection of persistent node ids, denoting the voting configuration for cluster state changes.
The core class of the cluster state coordination algorithm, directly implementing the formal model
Pluggable persistence layer for CoordinationState.
A collection of votes, used to calculate quorums.
This exception is thrown when rejecting state transitions on the CoordinationState object, for example when receiving a publish request with the wrong term or version.
 
 
 
Context object used to rewrite QueryBuilder instances into simplified version in the coordinator.
 
This map is designed to be constructed from an immutable map and be copied only if a (rare) mutation operation occurs.
CoreValuesSourceType holds the ValuesSourceType implementations for the core aggregations package.
 
This exception is thrown when Elasticsearch detects an inconsistency in one of its persistent states.
This file is forked from the https://netty.io project.
 
 
A simple thread-safe count-down class that does not block, unlike a CountDownLatch.
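The non-blocking pattern boils down to an atomic counter whose final decrement is reported exactly once. A minimal sketch, with invented names and no claim to match the Elasticsearch class, could look like this.

```java
import java.util.concurrent.atomic.AtomicInteger;

/** Illustrative non-blocking count-down: countDown() returns true exactly once, when the count reaches zero. */
public class NonBlockingCountDown {
    private final AtomicInteger remaining;

    public NonBlockingCountDown(int count) {
        this.remaining = new AtomicInteger(count);
    }

    /** Decrements the count; returns true only for the call that brings it to zero. */
    public boolean countDown() {
        while (true) {
            int current = remaining.get();
            if (current <= 0) {
                return false; // already counted down
            }
            if (remaining.compareAndSet(current, current - 1)) {
                return current == 1;
            }
        }
    }

    public boolean isCountedDown() {
        return remaining.get() <= 0;
    }

    public static void main(String[] args) {
        NonBlockingCountDown latch = new NonBlockingCountDown(3);
        for (int i = 0; i < 4; i++) {
            System.out.println("countDown -> " + latch.countDown()); // false, false, true, false
        }
    }
}
```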
Wraps another listener and adds a counter -- each invocation of this listener will decrement the counter, and when the counter has been exhausted the final invocation of this listener will delegate to the wrapped listener.
A CountedBitSet wraps a FixedBitSet but automatically releases the internal bitset when all bits are set to reduce memory usage.
 
A CounterMetric is used to track the number of completed and outstanding items, for example, the number of executed refreshes, the currently used memory by indexing, the current pending search requests.
Simple usage stat counters based on longs.
A reusable StreamOutput that just counts how many bytes are written.
 
 
Cluster state update request that allows creating an index
A request to create an index.
Builder for a create index request
A response for a create index action.
Create snapshot action
Create snapshot request
Create snapshot request builder
Create snapshot response
Thrown when errors occur while creating a Injector.
CtxMap<T extends Metadata>
A scripting ctx map with metadata for write ingest contexts.
An approximate set membership data structure. CuckooFilters are similar to Bloom Filters in usage; values are inserted, and the Cuckoo can be asked if it has seen a particular value before.
 
 
 
A custom analyzer that is built out of a single Tokenizer and a list of TokenFilters.
 
 
A field visitor that allows to load a selection of the stored fields by exact name.
Pattern converter to populate CustomMapFields in a pattern.
A custom normalizer that is built out of a char and token filters.
Custom passage formatter that allows us to: 1) extract different snippets (instead of a single big string) together with their scores (Snippet) 2) use the Encoder implementations that are already used with the other highlighters
 
Custom field that allows storing an integer value as a term frequency in lucene.
Subclass of the UnifiedHighlighter that works for a single field in a single document.
Contains information about a dangling index, i.e.
The dangling indices state is responsible for finding new dangling indices (indices that have their state written on disk, but don't exist in the metadata of the cluster).
 
Context object used to rewrite QueryBuilder instances into simplified version on the datanode where the request is going to be executed.
 
 
Operations on data streams.
 
 
Represents the last auto sharding event that occurred for a data stream.
Calculates the optimal number of shards the data stream write index should have based on the indexing load.
A utility class that contains the mappings and settings logic for failure store indices that are a part of data streams.
A cluster state entry that contains global retention settings that are configurable by the user.
Holds the data stream lifecycle metadata that are configuring how a data stream is managed.
This builder helps during the composition of the data stream lifecycle templates.
Downsampling holds the configuration about when should elasticsearch downsample a backing index.
A round represents the configuration for when and how elasticsearch will downsample a backing index.
Retention is the least amount of time that the data will be kept by elasticsearch.
This enum represents all configuration sources that can influence the retention of a data stream.
Represents the data stream lifecycle information that would help shape the functionality's health.
Custom Metadata implementation for storing a map of DataStreams and their names.
 
 
 
 
 
 
FieldMapper for the data-stream's timestamp meta-field.
 
 
The DataTier class encapsulates the formalization of the "content", "hot", "warm", and "cold" tiers as node roles.
This setting provider injects the setting allocating all newly created indices with index.routing.allocation.include._tier_preference: "data_hot" for a data stream index or index.routing.allocation.include._tier_preference: "data_content" for an index not part of a data stream unless the user overrides the setting while the index is being created (in a create index request for instance)
A FieldMapper for dates.
 
 
 
 
 
 
 
Temporary parse method that takes into account the date format.
 
 
A builder for histograms on date fields.
 
 
The interval the date histogram is based on.
A SingleDimensionValuesSource for date histogram values.
A CompositeValuesSourceBuilder that builds a RoundingValuesSource from a Script or a field name using the provided interval.
 
A shared interface for aggregations that parse and use "interval" parameters.
A class that handles all the parsing, bwc and deprecations surrounding date histogram intervals.
 
An abstraction over date math parsing.
 
 
 
 
A simple wrapper class that indicates that the wrapped query has made use of NOW when parsing its datemath.
BlockDocValuesReader implementation for date scripts.
 
 
 
 
 
Implement this interface to provide a decay function that is executed on a distance.
 
This is the base class for scoring a single field.
Parser used for all decay functions, one instance each.
This abstract class defines the basic Decision used during the shard allocation process.
Simple class representing a list of decisions
Simple class representing a single decision
This enumeration defines the possible types of decisions
Inspects token streams for duplicate sequences of tokens.
 
The default rest channel for incoming requests.
 
 
A BucketCollector that records collected doc IDs and buckets and allows replaying a subset of the collected buckets.
 
Compressor implementation based on the DEFLATE compression algorithm.
 
A holder for Writeables that delays reading the underlying object on the receiving end.
A Writeable stored in serialized form backed by a ReleasableBytesReference.
The DelayedAllocationService listens to cluster state changes and checks if there are unassigned shards with delayed allocation (unassigned shards that have the delay marker).
A wrapper around reducing buckets with the same key that can delay that reduction as long as possible.
Class for reducing a list of DelayedBucketReducer to a single InternalAggregations and the number of documents in a delayable fashion.
An exception marking that this recovery attempt should be ignored (since probably, we already recovered).
A default Field to provide ScriptDocValues for fields that are not supported by the script fields api.
A wrapper around an ActionListener L that by default delegates failures to L's ActionListener.onFailure(java.lang.Exception) method.
 
Creates a new DeleteByQueryRequest that uses scrolling and bulk requests to delete all documents matching the query.
 
Represents a request to delete a particular dangling index, specified by its UUID.
 
 
Cluster state update request that allows deleting one or more indices
A request to delete an index.
 
A request to delete an index template.
 
 
 
 
Unregister repository request.
Builder for unregister repository request
A request to delete a document from an index based on its type and id.
A delete document action request builder.
The response of the delete action.
Builder class for DeleteResponse.
The result of deleting multiple blobs from a BlobStore.
Delete snapshot request
Delete snapshot request builder
 
 
 
 
 
 
Provides the denormalized vectors.
DenseVector value type for Painless.
 
A FieldMapper for indexing a dense vector of floats.
 
 
 
Holds enhanced stats about a dense vector mapped field.
 
 
Statistics about indexed dense vector
A variable that can be resolved by an injector.
A logger message used by DeprecationLogger, enriched with fields named following ECS conventions.
Deprecation log messages are categorised so that consumers of the logs can easily aggregate them.
A logger that logs deprecation notices.
DeprecationRestHandler provides a proxy for any existing RestHandler so that usage of the handler can be logged using the DeprecationLogger.
A Recycler implementation based on a Deque.
The desired balance of the cluster, indicating which nodes should hold a copy of each shard.
Holds the desired balance and updates it as the cluster evolves.
The input to the desired balance computation.
Given the current allocation of shards and the desired balance, performs the next (legal) shard movements towards the goal.
 
 
 
 
 
A ShardsAllocator which asynchronously refreshes the desired balance held by the DesiredBalanceComputer and then takes steps towards the desired balance using the DesiredBalanceReconciler.
 
 
 
 
Desired nodes represents the cluster topology that the operator of the cluster is aiming for.
 
 
 
 
Helper for dealing with destructive operations and wildcard usage.
 
 
DFS phase of a search request, used to make scoring 100% accurate by collecting additional info from each shard before the query phase.
 
This class collects profiling information for the dfs phase and generates a ProfileResult for the results of the timing information for statistics collection.
 
 
Details a potential issue that was diagnosed by a HealthService.
Details a diagnosis - cause and a potential action that a user could take to clear an issue identified by a HealthService.
Represents a type of affected resource, together with the resources/abstractions that are affected.
 
Represents difference between states of cluster state parts
Cluster state part, changes in which can be serialized
This is a Map<String, String> that implements AbstractDiffable so it can be used for cluster state purposes
Represents differences between two DiffableStringMaps.
 
Implementation of the ValueSerializer that wraps value and diff readers.
Serializer for Diffable map values.
Provides read and write operations to serialize keys of map
Represents differences between two maps of objects and is used as base class for different map implementations.
Serializer for non-diffable map values
Implementation of ValueSerializer that serializes immutable sets
Provides read and write operations to serialize map values.
Like ShapeType but has specific types for when the geometry is a GeometryCollection and more information about what the highest-dimensional sub-shape is.
 
 
 
 
 
 
 
This attribute can be used to indicate that the PositionLengthAttribute should not be taken into account in this TokenStream.
Default implementation of DisableGraphAttribute.
A module for loading classes for node discovery.
A discovery node represents a node that is part of the cluster.
 
 
Represents a node role.
This class holds all DiscoveryNodes in the cluster and provides convenience methods to access, modify, and merge / diff discovery nodes.
 
 
An additional extension point for Plugins that extends Elasticsearch's discovery functionality.
 
This indicator reports the cluster's disk health, i.e. whether the cluster has enough available space to function.
The health status of the disk space of this node along with the cause.
 
Determines the disk health of this node by checking if it exceeds the thresholds defined in the health metadata.
 
The DiskThresholdDecider checks that the node a shard is potentially being allocated to has enough disk space.
Listens for a node to go over the high watermark and kicks off an empty reroute if it does.
A container to keep settings for disk thresholds up to date with cluster setting changes.
Encapsulation class used to represent the amount of disk used on a node.
A query that generates the union of documents produced by its sub-queries, and that scores each document with the maximum score for that document as produced by any sub-query, plus a tie breaking increment for any additional matching sub-queries.
A query to boost scores based on their proximity to the given origin for date, date_nanos and geo_point field types
 
The DistanceUnit enumerates several units for measuring distances.
This class implements a value+unit tuple.
 
 
 
Alternative, faster implementation for converting String keys to longs but with the potential for hash collisions.
 
 
 
 
Mapper for the doc_count field.
 
An implementation of a doc_count provider that reads the value of the _doc_count field in the document.
A SliceQuery that partitions documents based on their Lucene ID.
Access the document in a script, providing both old-style doc['fieldname'] and new-style field('fieldname') access to the fields.
 
Collects dimensions from documents.
Makes sure that each dimension only appears one time.
A single field name and values part of SearchHit and GetResult.
 
 
A parser for documents
Context used when parsing incoming documents.
An exception thrown during document parsing Contains information about the location in the document where the error was encountered
An interface to provide instances of document parsing observer and reporter
An internal plugin that will return a DocumentParsingProvider.
An interface to allow wrapping an XContentParser and observing the events emitted while parsing. A default implementation returns a noop DocumentSizeObserver
An interface to allow performing an action when parsing has been completed and successful
 
 
Value fetcher that loads from doc values.
A formatter for values as returned by the fielddata/doc-values APIs.
Singleton, stateless formatter, for representing bytes as base64 strings
Stateless, Singleton formatter for boolean values.
 
 
Singleton, stateless formatter for geo hash values
 
Stateless, singleton formatter for IP address data
Singleton, stateless formatter for "Raw" values, generally taken to mean keywords and other strings.
DocValues format for time series id.
DocValues format for unsigned 64 bit long values, that are stored as shifted signed 64 bit long values.
Provide access to DocValues for script field api and doc API.
 
This interface is used to mark classes that generate both Field and ScriptDocValues for use in a script.
A SliceQuery that uses the numeric doc values of a field to do the slicing.
Generic interface to group ActionRequests which perform writes to a single document. Action requests implementing this can be part of a BulkRequest
Requested operation type to perform on the document
A base class for the response of a write operation that involves a single doc
Base class of all DocWriteResponse builders.
An enum that represents the results of CRUD operations, primarily used to communicate the type of operation that occurred.
Abstraction of an array of double values.
A monotonically increasing double based on a callback.
Represents hard_bounds and extended_bounds in histogram aggregations.
A monotonically increasing metric that uses a double.
 
 
 
 
 
Record non-additive double values based on a callback.
Record arbitrary values that are summarized statistically, useful for percentiles and histograms.
BlockDocValuesReader implementation for double scripts.
 
 
 
 
 
 
 
 
 
Result of the TermsAggregator when the field is some kind of decimal number like a float, double, or distance.
 
A counter that supports decreasing and increasing values.
Comparator source for double values.
A custom script that can be used for various DoubleValue Lucene operations.
A factory to construct DoubleValuesScript instances.
 
 
 
This class holds the configuration details of a DownsampleAction that downsamples time series (TSDB) indices.
This class contains the high-level logic that drives the rollup job.
Drop processor only returns null for the execution result to indicate that any document executed by it should not be indexed.
 
Represents a reduced view into an ErrorEntry, removing the exception message and last occurrence timestamp as we could potentially send thousands of entries over the wire and the omitted fields would not be used.
A Trie structure for analysing byte streams for duplicate sequences.
Provides statistics useful for detecting duplicate sections of text
 
An implementation of log4j2's ContextDataProvider that can be configured at runtime (after being loaded by log4j's init mechanism).
Defines a MappedFieldType that exposes dynamic child field types. If the field is named 'my_field', then a user is able to search on the field in both of the following ways: - Using the field name 'my_field', which will delegate to the field type as usual.
DynamicMap is used to wrap a Map for a script parameter.
 
 
The type of a field as detected while parsing a json document.
This is a wrapper class around co.elastic.logging.log4j2.EcsLayout in order to avoid a duplication of configuration in log4j2.properties
 
Elasticsearch codec as of 8.14.
Used to indicate that the authentication process encountered a server-side error (5xx) that prevented the credentials verification.
 
This exception is thrown when Elasticsearch detects an inconsistency in one of its persistent files.
A FilterDirectoryReader that exposes Elasticsearch internal per shard / index information like the shard ID.
A base class for all elasticsearch exceptions.
A generic exception indicating failure to generate.
A FilterLeafReader that exposes Elasticsearch internal per shard / index information like the shard ID.
 
 
 
Unchecked exception that is translated into a 400 BAD REQUEST error when it bubbles out over HTTP.
Helper class to determine if the ES process is shutting down
Utility class to safely share ElasticsearchDirectoryReader instances across multiple threads, while periodically reopening.
This exception is thrown to indicate that the access has been denied because of role restrictions that an authenticated subject might have (e.g.
Generic security exception
Exception whose RestStatus is arbitrary rather than derived.
The same as TimeoutException, simply a runtime one.
An exception that is meant to be "unwrapped" when sent back to the user as an error because its cause, if non-null, is always more useful to the user than the exception itself.
It's provably impossible to guarantee that any leader election algorithm ever elects a leader, but they generally work (with probability that approaches 1 over time) as long as elections occur sufficiently infrequently, compared to the time it takes to send a message to another node and receive a response back.
Allows plugging in a custom election strategy, restricting the notion of an election quorum.
Contains a result for whether a node may win an election and the reason if not.
A core component of a module or injector.
Exposes elements of a module so they can be inspected or validated.
Visit elements.
ClusterInfoService that provides empty maps for disk usage and shard sizes
 
A script Field with no mapping, always returns defaultValue.
 
This class defines an empty task settings object.
 
Allocation values, or rather their string representation, to be used with EnableAllocationDecider.CLUSTER_ROUTING_ALLOCATION_ENABLE_SETTING / EnableAllocationDecider.INDEX_ROUTING_ALLOCATION_ENABLE_SETTING via cluster / index settings.
Rebalance values, or rather their string representation, to be used with EnableAllocationDecider.CLUSTER_ROUTING_REBALANCE_ENABLE_SETTING / EnableAllocationDecider.INDEX_ROUTING_REBALANCE_ENABLE_SETTING via cluster / index settings.
EnableAssignmentDecider is used to allow/disallow the persistent tasks to be assigned to cluster nodes.
Allocation values, or rather their string representation, to be used with EnableAssignmentDecider.CLUSTER_TASKS_ALLOCATION_ENABLE_SETTING via cluster settings.
 
 
 
 
 
 
 
 
 
 
 
A throttling class that can be activated, causing the acquireThrottle method to block on a lock when throttling is enabled
 
A Lock implementation that always allows the lock to be acquired
 
 
 
Type of operation (index, delete); subclasses use static types.
Captures the result of a refresh operation on the index shard.
Base class for index and delete operation results; holds result metadata (e.g.
 
 
 
 
 
Called for each new opened engine reader to warm new segments
 
An exception indicating that an Engine creation failed.
 
Simple Engine Factory
A plugin that provides alternative engine implementations.
The environment in which things exist.
A cli command which requires an Environment to use current paths and settings.
Represents the recorded error for an index that Data Stream Lifecycle Service encountered.
A collection of error messages.
Indicates that a result could not be returned while preparing or resolving a binding.
Based on Lucene 9.0 postings format, which encodes postings in packed integer blocks for fast decode.
Holds all state required for ES812PostingsReader to produce a PostingsEnum without re-seeking the terms dict.
 
 
 
 
 
 
 
 
Writes quantized vector values and metadata to index segments.
This implementation is forked from Lucene's BloomFilterPosting to support on-disk bloom filters.
This implementation is forked from Lucene's BloomFilterPosting to support on-disk bloom filters.
Implementation of the MurmurHash3 128-bit hash functions.
This class provides encoding and decoding of doc values using the following schemes: delta encoding: encodes numeric fields in such a way to store the initial value and the difference between the initial value and all subsequent values.
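As a rough, self-contained sketch of the delta-encoding scheme described above (illustrative names only, not the actual codec, which layers further encodings on top): the initial value is stored as-is and every later value is stored as its offset from that initial value.

    // Illustrative sketch of the delta-encoding idea: store the initial value,
    // then each subsequent value as its difference from the initial value.
    final class DeltaBlockSketch {
        static long[] encode(long[] values) {
            if (values.length == 0) {
                return values;
            }
            long[] out = new long[values.length];
            out[0] = values[0];                  // initial value stored as-is
            for (int i = 1; i < values.length; i++) {
                out[i] = values[i] - values[0];  // offset from the initial value
            }
            return out;
        }

        static long[] decode(long[] encoded) {
            if (encoded.length == 0) {
                return encoded;
            }
            long[] out = new long[encoded.length];
            out[0] = encoded[0];
            for (int i = 1; i < encoded.length; i++) {
                out[i] = encoded[0] + encoded[i];
            }
            return out;
        }
    }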
 
 
 
Cache helper that allows swapping in implementations that are different to Lucene's IndexReader.CacheHelper which ties its lifecycle to that of the underlying reader.
 
Implementation of ESCacheHelper that wraps an IndexReader.CacheHelper.
 
 
A collection of static methods to help create different ES Executor types.
 
Deprecated.
ECSJsonlayout should be used as JSON logs layout
 
 
 
A base class for custom log4j logger messages.
 
 
An extension to thread pool executor, allowing (in the future) to add specific additional stats to it.
A ToParentBlockJoinQuery that allows to retrieve its nested path.
An EvictingQueue is a non-blocking queue which is limited to a maximum size; when new elements are added to a full queue, elements are evicted from the head of the queue to accommodate the new elements.
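A minimal sketch of that head-evicting behaviour, backed here by a plain ArrayDeque; the class below is an illustrative stand-in, not the actual EvictingQueue implementation.

    import java.util.ArrayDeque;

    // Size-bounded queue sketch: adding to a full queue evicts from the head.
    final class BoundedQueueSketch<E> {
        private final ArrayDeque<E> delegate = new ArrayDeque<>();
        private final int maxSize;

        BoundedQueueSketch(int maxSize) {
            this.maxSize = maxSize;
        }

        boolean offer(E element) {
            while (delegate.size() >= maxSize) {
                delegate.pollFirst();       // evict from the head to make room
            }
            return delegate.offerLast(element);
        }

        E peek() {
            return delegate.peekFirst();
        }

        int size() {
            return delegate.size();
        }
    }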
Exact knn query builder.
 
ExecutorBuilder<U extends org.elasticsearch.threadpool.ExecutorBuilder.ExecutorSettings>
Base class for executor builders.
A class that gathers the names of thread pool executors that should be used for a particular system index or system data stream.
Some operations need to use different executors for different index patterns.
Searches for, and allocates, shards for which there is an existing on-disk copy somewhere in the cluster.
Constructs a query that only matches documents in which the field has a value.
 
To be implemented by ScoreScript, which can provide an Explanation of the score. This is currently not used inside Elasticsearch but it is used externally, see for example: https://github.com/elastic/elasticsearch/issues/8561
Encapsulates the information that describes an index from its data stream lifecycle perspective.
Explains the scoring calculations for the top hits.
Explain request encapsulating the explain query and document identifier to get an explanation for.
A builder for ExplainRequest.
Response containing the score explanation.
 
 
 
 
Holds a value that is either: a) set implicitly e.g.
 
Implements exponentially weighted moving averages (commonly abbreviated EWMA) for a single value.
Statistics over a set of values (either aggregated over field data or scripts)
 
 
 
Extended Statistics over a set of buckets
 
 
 
 
An extension point for Plugin implementations to be themselves extensible.
 
A registry of Extensible interfaces/classes read from extensibles.json file.
A utility for loading SPI extensions.
Object representing the extent of a geometry object within a TriangleTreeWriter.
Lazily creates (and caches) values for keys.
 
A class representing a failed shard.
Thrown when a cluster state publication fails to commit the new cluster state.
Exception indicating that one or more requested indices are failure indices.
Transforms an indexing request using error information into a new index request to be stored in a data stream's failure store.
Specialized class for randomly sampling values from the geometric distribution
 
Reads and consolidates features exposed by a list of FeatureSpecifications, grouping them into historical features and node features for consumption by FeatureService.
A utility class for registering feature flags in Elasticsearch code.
This class specifies features for the features functionality itself.
Holds the results of the most recent attempt to migrate system indices.
 
Manages information on the features supported by nodes in the cluster.
Specifies one or more features that are supported by this node.
Encapsulates state required to execute fetch phases
All the required context to pull a field from the doc values.
Fetch sub phase which pulls data from doc values.
The context needed to retrieve fields.
A fetch sub-phase for high-level field retrieval.
This action retrieves all the HealthInfo data from the health node.
 
 
 
Fetch phase of a search request, used to fetch the actual top matching documents to be returned to the client, identified after reducing all of the matches returned by the query phase
 
 
 
 
Context used to fetch the _source.
 
Sub phase within the fetch phase used to fetch things *about* the documents like highlighting or matched queries.
 
Executes the logic for a FetchSubPhase against a particular leaf reader and hit
 
A field in a document accessible via scripting.
A mapper for field aliases.
 
 
 
Wrapper around a field name and the format that should be used to display values of this field.
Describes the capabilities of a field optionally merged across multiple indices.
 
 
 
 
Response for FieldCapabilitiesRequest requests.
Used by all field data based aggregators.
Utility methods, similar to Lucene's DocValues.
Holds context information for the construction of FieldData
 
The global ordinal stats.
 
A helper class to FetchFieldsPhase that's initialized with a list of field patterns to fetch.
 
 
 
A Builder for a ParametrizedFieldMapper
 
Represents a list of fields, with an optional boost factor, to which the current field should be copied.
Creates mappers for fields that can act as time-series dimensions.
 
 
 
A configurable parameter for a field mapper
Serializes a parameter
Check on whether or not a parameter should be serialized
TypeParser implementation that automatically handles parsing
 
A reusable class to encode field -> memory size mappings
A mapper that indexes the field names of a document under _field_names.
 
 
Stored fields visitor which provides information about the field names that will be requested
Filter for visible fields.
 
A script to produce dynamic values for return fields.
 
A factory to construct FieldScript instances.
Holds stats about the content of a script
A sort builder to sort based on a document field.
Holds stats about a mapped field.
Base StoredFieldVisitor that retrieves all non-redundant metadata.
 
 
 
 
 
 
 
Wraps a DirectoryReader and tracks all access to fields, notifying a FieldUsageTrackingDirectoryReader.FieldUsageNotifier upon access.
 
A function_score function that multiplies the score with the value of a field from the document, optionally multiplying the field by a factor first, and applying a modification (log, ln, sqrt, square, etc) afterwards.
The Type class encapsulates the modification types that can be applied to the score/value product.
Builder to construct field_value_factor functions for a function score query.
Represents values for a given document
An implementation of SeedHostsProvider that reads hosts/ports from FileBasedSeedHostsProvider.UNICAST_HOSTS_FILE.
Listener interface for the file watching service.
Callback interface that the file watcher uses to notify listeners about file changes.
 
This context will execute a file restore of the lucene files.
File based settings applier service which watches an `operator` directory inside the config directory.
This class provides utility methods for calling some native methods related to filesystems.
Elasticsearch utils to work with Path
File resources watcher. The file watcher checks a directory and all of its subdirectories for file changes and notifies its listeners accordingly.
A filter aggregation.
A frequency TermsEnum that returns frequencies derived from a collection of cached leaf termEnums.
 
 
This AllocationDecider controls shard allocation by include and exclude filters via dynamic cluster and index routing settings.
A blob container that by default delegates all methods to an internal BlobContainer.
Collects results by running each filter against the searcher without building any LeafBucketCollectors, which is generally faster than FiltersAggregator.Compatible, but is not supported when there is a parent aggregator or any child aggregators.
Builds FilterByFilterAggregator when the filters are valid and it would be faster than a "native" aggregation implementation.
A Client that contains another Client which it uses as its basic source, possibly transforming the requests / responses along the way or providing additional functionality.
 
This is NOT a simple clone of the SearchExecutionContext.
 
 
IndexOutput that delegates all calls to another IndexOutput
 
 
A multi bucket aggregation where the buckets are defined by a set of filters (a bucket per filter).
A bucket associated with a specific filter (identified by its key)
 
Aggregator for filters.
 
 
A script implementation of a query filter.
A factory to construct stateful FilterScript factories for a specific index.
A factory to construct FilterScript instances.
 
Wraps a StreamInput and delegates to it.
Context for finalizing a snapshot.
 
Models a response to a FindDanglingIndexRequest.
A builder for fixed executors.
Class for reducing many fixed lists of FixedMultiBucketAggregatorsReducer to a single reduced list.
 
A field mapper that accepts a JSON object and flattens it into a single field.
 
A field data implementation that gives access to the values associated with a particular JSON key.
 
A field type that represents the values under a particular JSON key, used when searching under a specific key as in 'my_flattened.key: some_value'.
A field type that represents all 'root' values.
 
Abstraction of an array of double values.
 
Comparator source for float values.
 
 
 
A flush request to flush one or more indices.
 
 
The FollowersChecker is responsible for allowing a leader to check that its followers are still connected and healthy.
 
 
A request to force merging the segments of one or more indices.
 
A request to force merge one or more indices.
 
 
 
Simple helper class for FastVectorHighlighter FragmentsBuilder implementations.
A frequency terms enum that maintains a cache of docFreq, totalTermFreq, or both for repeated term lookup.
Execute an action at most once per time interval
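The "at most once per time interval" behaviour can be pictured with a small gate like the one below, built on System.nanoTime and an AtomicLong; the name and details are illustrative assumptions, not the actual class.

    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    // Runs the supplied action only if the configured interval has elapsed
    // since the last successful run; concurrent callers race on a CAS.
    final class IntervalGateSketch {
        private final long intervalNanos;
        private final AtomicLong lastRunNanos = new AtomicLong(Long.MIN_VALUE);

        IntervalGateSketch(long interval, TimeUnit unit) {
            this.intervalNanos = unit.toNanos(interval);
        }

        boolean maybeRun(Runnable action) {
            long now = System.nanoTime();
            long last = lastRunNanos.get();
            if (last != Long.MIN_VALUE && now - last < intervalNanos) {
                return false;                       // still inside the interval
            }
            if (lastRunNanos.compareAndSet(last, now)) {
                action.run();                       // only one caller wins the race
                return true;
            }
            return false;
        }
    }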
A file system based implementation of BlobContainer.
 
 
Runs periodically and attempts to create a temp file to see if the filesystem is writable.
 
 
 
 
 
Shared file system implementation of the BlobStoreRepository
 
A query that allows for a pluggable boost function / filter.
 
 
A query that uses filters, each with an associated script, to compute the score.
Function to be associated with an optional filter, meaning it will be executed only for the documents that match the given filter.
 
A unit class that encapsulates all inexact search parsing and conversion from similarities to edit distances etc.
Fuzzy options for completion suggester
Options for fuzzy queries
A Query that does fuzzy matching for a specific value.
 
Loads (and maybe upgrades) cluster metadata at startup, and persistently stores cluster metadata for future restarts.
Encapsulates the incremental writing of metadata to a PersistedClusterStateService.Writer.
 
 
 
 
Deprecated.
Use ScriptException for exceptions from the scripting engine, otherwise use a more appropriate exception (e.g.
Marker interface that allows specific NamedWritable objects to be serialized as part of the generic serialization in StreamOutput and StreamInput.
This class can parse points from XContentParser and supports several formats:
A class representing a Geo-Bounding-Box for use by Geo queries and aggregations that deal with extents/rectangles representing rectangular areas of interest.
 
Creates a Lucene query that will filter for all documents that lie within the specified bounding box.
An aggregation that computes a bounding box in which all documents of the current bucket are.
 
 
Interface for GeoCentroidAggregator
 
A ContextMapping that uses a geo location/area as a criteria.
 
Geo distance calculation.
 
 
 
Filter results of a query to include only those within a specific distance to some geo point.
 
A geo distance based sorting on a geo point like field.
Output formatters for geo fields support extensions such as vector tiles.
Defines an extension point for geometry formatter
A geo-grid aggregation.
A bucket that is associated with a geo-grid cell.
 
 
Aggregates data expressed as longs (for efficiency's sake) but formats results as aggregation-specific strings.
 
Filters out geohashes using the provided bounds at the provided precision.
CellIdSource implementation for Geohash aggregation
 
Aggregates data expressed as GeoHash longs (for efficiency's sake) but formats results as Geohash strings.
 
Utility class for converting libs/geo shapes to and from GeoJson
A reusable Geometry doc value reader for a Geometry previously serialized using GeometryDocValueWriter.
This is a tree-writer that serializes a list of IndexableField as an interval tree into a byte array.
Script producing geometries.
 
 
 
Output formatters supported by geometry fields.
Utility class for binary serialization/deserialization of libs/geo classes
Transforms provided Geometry into a lucene friendly format by normalizing latitude and longitude coordinates and breaking geometries that cross the dateline.
A utility class to read geometries from an XContentParser or a generic object.
Supported formats to read/write JSON geometries.
 
 
Field Mapper for geo_point types.
 
 
Script producing geo points.
 
 
 
 
 
 
 
 
 
 
Per-document geo-point values.
Deprecated.
Defines the query context for GeoContextMapping
 
FieldMapper for indexing LatLonShapes.
 
 
Utility class that converts geometries into Lucene-compatible form for indexing in a geo_shape field.
 
Implemented by MappedFieldType that support GeoShape queries.
Derived AbstractGeometryQueryBuilder that builds a lat, lon GeoShape Query.
Filters out tiles using the provided bounds at the provided precision.
CellIdSource implementation for GeoTile aggregation
 
Aggregates data expressed as geotile longs (for efficiency's sake) but formats results as geotile strings.
 
 
 
Implements geotile key hashing, same as used by many map tile implementations.
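Geotile keys follow the standard web-mercator ("slippy map") tile scheme. The sketch below shows that tile math for a plain zoom/x/y string key; it is an approximation only, as the real implementation also clamps latitudes near the poles and packs zoom/x/y into a single long.

    // Standard web-mercator tile coordinates for a (lat, lon) point at a zoom level.
    final class TileMathSketch {
        static int lonToTileX(double lon, int zoom) {
            int tiles = 1 << zoom;
            int x = (int) Math.floor((lon + 180.0) / 360.0 * tiles);
            return Math.min(Math.max(x, 0), tiles - 1);
        }

        static int latToTileY(double lat, int zoom) {
            int tiles = 1 << zoom;
            double latRad = Math.toRadians(lat);
            double y = (1.0 - Math.log(Math.tan(latRad) + 1.0 / Math.cos(latRad)) / Math.PI) / 2.0 * tiles;
            return Math.min(Math.max((int) Math.floor(y), 0), tiles - 1);
        }

        // Human-readable "zoom/x/y" form of the key.
        static String key(double lat, double lon, int zoom) {
            return zoom + "/" + lonToTileX(lon, zoom) + "/" + latToTileY(lat, zoom);
        }
    }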
 
Represents the point of the geohash cell that should be used as the value of geohash
This enum is used to determine how to deal with invalid geo coordinates in geo related queries: On STRICT validation invalid coordinates cause an exception to be thrown.
 
 
 
 
Action to retrieve one or more component templates
Request to retrieve one or more component templates
 
 
Request to retrieve one or more index templates
 
 
 
 
 
Encapsulates the configured properties we want to display for each backing index.
 
 
 
 
 
Action for getting a feature upgrade status.
Request for whether system features need to be upgraded
A response showing whether system features need to be upgraded and, feature by feature, which indices need to be upgraded.
A class for a particular feature, showing whether it needs to be upgraded and the earliest Elasticsearch version used to create one of this feature's system indices.
A data class that holds an index name and the version of Elasticsearch with which that index was created
 
 
 
Request the mappings of specific fields. Note: there is a new class with the same name for the Java HLRC that uses a typeless format.
A helper class to build GetFieldMappingsRequest objects
Response object for the GetFieldMappingsRequest API. Note: there is a new class with the same name for the Java HLRC that uses a typeless format.
 
 
 
 
 
 
A request to retrieve information about an index.
 
 
A response for a get index action.
 
Request that allows retrieving index templates
 
 
 
 
 
 
 
 
 
 
 
Get repositories action
Get repository request
Get repository request builder
Get repositories response
A request to get a document (its source) from an index based on its id.
A get document action request builder.
The response of a get action.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
Get snapshots action
Get snapshot request
Get snapshots request builder
Get snapshots response
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
A request to get node tasks
Builder for the request to retrieve the list of tasks running on the specified nodes
Returns the list of tasks currently running on the nodes
A global aggregation.
 
 
 
Represents a collection of global checkpoint listeners.
A global checkpoint listener consisting of a callback that is notified when the global checkpoint is updated or the shard is closed.
Background global checkpoint sync action initiated when a shard goes inactive.
 
 
An aggregator that computes approximate counts of unique values using global ords.
An interface that global ordinals index field data instances can implement in order to keep track of build-time costs.
Utility class to build global ordinals.
Concrete implementation of IndexOrdinalsFieldData for global ordinals.
An aggregator of string values that relies on global ordinals in order to build buckets.
 
 
 
This exception is thrown when there is something wrong with the structure of the graph (such as the graph of pipelines) to be applied to a document.
An action listener that delegates its results to another listener once it has received N results (either successes or failures).
This class implements a compilation of ShardIterators.
The entry point to the Guice framework.
 
A TransportAction which, on creation, registers a handler for its own TransportAction.actionName with the transport service.
Tracks how long message handling takes on a transport thread as a histogram with fixed buckets.
 
 
 
This is used to pack the validation exception with the associated header.
This is a simplistic logger that adds warning messages to HTTP headers.
 
This class tracks the health api calls and counts the statuses that have been encountered along with the unhealthy indicators and diagnoses.
This class collects the stats of the health API from every node
 
 
 
 
Performs the health api stats operation.
 
 
This class provides helper methods to construct display messages for the health indicators.
 
 
This is a service interface used to calculate health indicator from the different modules or plugins.
This class wraps all the data returned by the health node.
Keeps track of several health statuses per node that can be used in health.
A cluster state entry that contains a list of all the thresholds used to determine if a node is healthy.
 
Contains the thresholds necessary to determine the health of the disk space of a node.
 
Contains the thresholds needed to determine the health of a cluster when it comes to the amount of room available to create new shards.
 
Keeps the health metadata in the cluster state up to date.
Main component used for selecting the health node of the cluster
Exception which indicates that no health node is selected in this cluster, aka the health node persistent task is not assigned.
This is a base class for all the requests that will be sent to the health node.
Persistent task executor that is managing the HealthNode.
Encapsulates the parameters needed to start the health node task, currently no parameters are required.
This class periodically logs the results of the Health API to the standard Elasticsearch server log file.
Valid modes of output for this logger
An additional extension point for Plugins that extends Elasticsearch's health indicators functionality.
This service collects health indicators from all modules and plugins of elasticsearch
 
Base class for health trackers that will be executed by the LocalHealthMonitor.
 
 
CircuitBreakerService that attempts to redistribute space between breakers if tripped
A builder for search highlighting.
 
 
 
Highlights a search result.
A field highlighted with its highlighted fragments.
 
 
 
A histogram aggregation.
A bucket in the histogram where documents fall in
A builder for histograms on numeric fields.
Constructs the per-shard aggregator instance for histogram aggregation.
 
Implemented by histogram aggregations and used by pipeline aggregations to insert buckets.
Per-document histogram value.
Per-segment histogram values.
A CompositeValuesSourceBuilder that builds a HistogramValuesSource from another numeric values source using the provided interval.
 
 
 
 
 
 
Tracks a collection of HttpStats.ClientStats for current and recently-closed HTTP connections.
 
 
 
 
A slim interface for precursors to HTTP requests, which doesn't expose access to the request's body, because it's not available yet.
 
A basic http request abstraction.
 
A basic http response abstraction.
This class encapsulates the stats for a single HTTP route MethodHandlers
 
 
 
Dispatches HTTP requests.
 
 
 
 
Serves as a node level registry for hunspell dictionaries.
 
HyperLogLog++ counter, implemented based on pseudo code from this paper and its appendix. This implementation differs from the original in that it uses a hash table instead of a sorted list for linear counting.
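For orientation, the sketch below shows the basic HyperLogLog register update that underlies such a counter: the top p bits of a 64-bit hash select a register, which keeps the maximum rank (position of the first set bit) seen in the remaining bits. It is an illustrative toy that deliberately omits the linear-counting mode and bias corrections of the ++ variant.

    // Basic HyperLogLog register update (sketch, not the actual class).
    final class HllRegistersSketch {
        private final int p;              // precision: 2^p registers
        private final byte[] registers;

        HllRegistersSketch(int p) {
            this.p = p;
            this.registers = new byte[1 << p];
        }

        void collect(long hash) {
            int index = (int) (hash >>> (64 - p));   // top p bits pick the register
            long remaining = hash << p;              // the other 64 - p bits
            int rank = Math.min(64 - p, Long.numberOfLeadingZeros(remaining)) + 1;
            if (rank > registers[index]) {
                registers[index] = (byte) rank;      // keep the maximum rank seen
            }
        }
    }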
A mapper for the _id field.
 
Responsible for loading the _id from stored fields or for TSDB synthesizing the _id from the routing, _tsid and @timestamp fields.
Returns a leaf instance for a leaf reader that returns the _id for segment level doc ids.
 
 
 
 
A query that will return only documents matching specific ids (and a type).
Simple class to log ifconfig-style output at DEBUG logging.
A field mapper that records fields that have been ignored because they were malformed.
 
 
Saves malformed values to stored fields so they can be loaded for synthetic _source.
 
This exception defines illegal states of shard routing
Exception thrown if trying to mutate files in an immutable directory.
An immutable map implementation based on open hash map.
 
 
Represents a request to import a particular dangling index, specified by its UUID.
 
 
 
Handles inbound messages by first deserializing a TransportMessage from an InboundMessage and then passing it to the appropriate handler.
 
 
Defines the include/exclude regular expression filtering for string terms aggregation.
 
 
 
 
 
Thrown by Diff.apply(T) method
A value class representing the basic required properties of an Elasticsearch index.
An index abstraction is a reference to one or more concrete indices.
Represents an alias and groups all IndexMetadata instances sharing the same alias name together.
Represents a concrete index and encapsulates its IndexMetadata
An index abstraction type.
 
IndexAnalyzers contains a name to analyzer mapping for a specific index.
 
 
Exception indicating that one or more requested indices are closed.
 
The result of analyzing disk usage of each field in a shard/index
Disk usage stats for a single field
An index event listener is the primary extension point for plugins and built-in services to react / listen to per-index and per-shard events.
Statistics about an index feature.
Describes the capabilities of a field in a single index.
Thread-safe utility class that allows to get per-segment values via the IndexFieldData.load(LeafReaderContext) method.
 
 
 
Simple wrapper class around a filter that matches parent documents and a filter that matches child documents.
A simple field data cache abstraction on the *index* level.
 
 
 
 
Specialization of IndexFieldData for geo points.
A collection of tombstones for explicitly marking indices as deleted in the cluster state.
A class to build an IndexGraveyard.
A class representing a diff of two IndexGraveyard objects.
An individual tombstone entry for representing a deleted index.
Specialization of IndexFieldData for histograms.
Represents a single snapshotted index in the repository.
 
An indexing listener for indexing and delete events.
A composite listener that multiplexes calls to each of the listeners' methods.
 
 
 
 
 
Class representing an (inclusive) range of long values in a field in an index which may comprise multiple shards.
 
 
 
 
 
Tracks the blob uuids of blobs containing IndexMetadata for snapshots as well as an identifier for each of these blobs.
 
 
Observer that tracks changes made to RoutingNodes in order to update the primary terms and in-sync allocation ids in IndexMetadata once the allocation round has completed.
This service is responsible for verifying index metadata when an index is introduced to the cluster, for example when restarting nodes, importing dangling indices, or restoring an index from a snapshot repository.
"Mode" that controls which behaviors and settings an index supports.
IndexModule represents the central extension point for index level custom implementations like: Similarity - New Similarity implementations can be registered through IndexModule.addSimilarity(String, TriFunction) while existing Providers can be referenced through Settings under the IndexModule.SIMILARITY_SETTINGS_PREFIX prefix along with the "type" value.
Directory wrappers allow to apply a function to the Lucene directory instances created by IndexStorePlugin.DirectoryFactory.
 
 
 
 
 
Used to iterate expression lists and work out which expression item is a wildcard or an exclusion.
 
This is a context for the DateMathExpressionResolver which does not require IndicesOptions or ClusterState since it uses only the start time to resolve expressions.
Generates valid Elasticsearch index names.
 
Base class for numeric field data.
The type of number.
Specialization of IndexFieldData for data that is indexed with ordinals.
OutputStream that writes into underlying IndexOutput
An IndexPatternMatcher holds an index pattern in a string and, given a Metadata object, can return a list of index names matching that pattern.
Specialization of IndexFieldData for geo points and points.
Thrown when some action cannot be performed because the primary shard of some shard group in an index has not been allocated after the API action.
The index-level query cache.
Index request to index a typed JSON document into a specific index and make it searchable.
An index document action request builder.
A response of an index operation.
Builder class for IndexResponse.
Generates the shard id for (id, routing) pairs.
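Conceptually the shard id comes from hashing the effective routing value and taking the remainder by the shard count, roughly as sketched below; the real routing uses a Murmur3-based hash (see Murmur3HashFunction) and also accounts for details such as routing partitions and the index's routing factor, so this is only an approximation.

    // Simplified routing sketch: hash the routing value (or the id if no routing
    // is given) and map it onto one of the primary shards.
    final class SimpleRoutingSketch {
        static int shardId(String id, String routing, int numberOfShards) {
            String effectiveRouting = routing != null ? routing : id;
            int hash = effectiveRouting.hashCode();   // stand-in for the real hash function
            return Math.floorMod(hash, numberOfShards);
        }
    }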
 
The IndexRoutingTable represents routing information for a single index.
 
Encapsulates all valid index level settings.
 
 
 
 
An IndexSettingProvider is a provider for index level settings that can be set explicitly as a default value (so they show up as "set" for newly created indices)
Infrastructure class that holds services that can be used by IndexSettingProvider instances.
Keeps track of the IndexSettingProvider instances defined by plugins and this class can be used by other components to get access to IndexSettingProvider instances.
This class encapsulates all index level settings and handles settings updates.
 
Simple struct encapsulating a shard failure
 
 
 
 
 
 
 
Generic shard restore exception
Thrown when restore of a shard fails
IndexShardRoutingTable encapsulates all instances of a single shard.
 
 
Generic shard snapshot exception
Thrown when snapshot process is failed on a shard level
Represent shard snapshot status
Used to complete listeners added via IndexShardSnapshotStatus.addAbortListener(org.elasticsearch.action.ActionListener<org.elasticsearch.index.snapshots.IndexShardSnapshotStatus.AbortStatus>) when the shard snapshot is either aborted/paused or it gets past the stages where an abort/pause could have occurred.
Returns an immutable state of IndexShardSnapshotStatus at a given point in time.
Snapshot stage
 
 
 
 
Holds all the information that is used to build the sort order of an index.
 
 
 
A plugin that provides alternative directory implementations.
An interface that describes how to create a new directory instance per shard.
IndexStorePlugin.IndexFoldersDeletionListener instances are invoked before the folders of a shard or an index are deleted from disk.
An interface that allows to create a new RecoveryState per shard.
An interface that allows plugins to override the IndexCommit of which a snapshot is taken.
 
 
 
The index version.
 
 
 
A handle on the execution of warm-up action.
 
 
Administrative actions/operations against indices.
Cluster state update request that allows adding or removing aliases
A request to add/remove aliases for one or more indices.
Request to take one or more actions on one or more indexes and alias combinations.
 
Builder for request to modify many aliases at once.
Response with error information for a request to add/remove aliases for one or more indices.
Result for a single alias add/remove action
 
 
 
 
 
Base cluster state update request that allows executing an update against multiple indices
 
 
 
 
Configures classes and services that are shared by indices on each node.
Contains all the multi-target syntax options.
 
Controls the way the target indices will be handled.
Applies to all indices already matched and controls the type of indices that will be returned.
 
The "gatekeeper" options apply on all indices that have been selected by the other Options.
 
Controls the way the wildcard expressions will be resolved.
 
 
Needs to be implemented by all ActionRequest subclasses that relate to one or more indices.
This subtype of request is for requests which may travel to remote clusters.
 
For use cases where a Request instance cannot implement Replaceable due to not supporting wildcards and only supporting a single index at a time, this is an alternative interface that the security layer checks against to determine if remote indices are allowed for that Request type.
The indices request cache allows caching shard-level request stage responses, helping to improve similar requests that are potentially expensive (because of aggs, for example).
 
 
 
 
 
 
Response for TransportIndicesShardStoresAction. Consists of IndicesShardStoresResponse.StoreStatus entries for the requested indices, grouped by index and shard id, plus a list of encountered node IndicesShardStoresResponse.Failures.
Single node failure while retrieving shard store information
Shard store information from a node
The status of the shard store with respect to the cluster
 
A request to get indices level stats.
A request to get indices level stats.
 
 
 
Field mapper that requires its input to be transformed through the InferenceService before indexing.
Contains inference field data for fields.
 
 
SPI extension that defines inference services
 
 
 
 
Holds information about currently in-flight shard level snapshot or clone operations on a per-shard level.
A utility for forwarding ingest requests to ingest nodes in a round-robin fashion.
A script used by ConditionalProcessor.
 
Represents a single document being captured before indexing and holds the source and metadata (like id, type and index).
 
 
Holds the ingest pipelines that are available in the cluster
An extension point for Plugin implementations to add custom ingest processors
A dedicated wrapper for exceptions encountered executing an ingest processor.
A script used by the Ingest Script Processor.
 
Holder class for several ingest related services.
Used by this class and ReservedPipelineAction
Specialized cluster state update task specifically for ingest pipeline operations.
Used in this class and externally by the ReservedPipelineAction
 
 
Container for pipeline stats.
Container for processor stats.
 
Annotates members of your implementation class (constructors, methods and fields) into which the Injector should inject values.
A constructor, field or method that can receive injections.
Builds the graphs of objects that make up your application.
 
 
Context used for inner hits retrieval
A SubSearchContext that associates TopDocs to each SearchHit in the parent search context
 
Context object used to rewrite QueryBuilder instances into an optimized version for extracting inner_hits.
 
 
Defines the type of request, whether the request is to ingest a document or search for a document.
A binding to a single instance.
 
 
 
 
Abstraction of an array of integer values.
 
An internal implementation of Aggregation.
 
Represents a set of InternalAggregations
 
A range aggregation for data that is encoded in doc values using a binary representation.
 
 
 
 
Serialization and merge logic for GeoCentroidAggregator.
 
 
InternalClusterInfoService provides the ClusterInfoService interface, routinely updated on a timer.
 
 
Internal context.
Implementation of Histogram.
 
 
 
 
 
 
 
 
 
 
Creates objects which will be injected.
ES: A factory that returns a pre-created instance.
 
 
 
 
Serialization and merge logic for GeoCentroidAggregator.
 
 
Represents a grid of cells where each cell's location is determined by a specific geo hashing algorithm.
 
Represents a grid of cells where each cell's location is determined by a geohash.
 
Represents a grid of cells where each cell's location is determined by a geohash.
 
A global scope get (the document set on which we aggregate is all documents in the search context (ie.
 
 
 
 
Implementation of Histogram.
 
 
 
 
 
 
Common superclass for results of the terms aggregation on mapped fields.
 
 
 
Helps to lazily construct the aggregation list for reduction
 
 
Result of the NestedAggregator.
 
 
 
Implementations for MultiBucketsAggregation.Bucket ordering strategies.
MultiBucketsAggregation.Bucket ordering strategy to sort by a sub-aggregation.
MultiBucketsAggregation.Bucket ordering strategy to sort by multiple criteria.
Contains logic for parsing a BucketOrder from a XContentParser.
Contains logic for reading/writing BucketOrder from/to streams.
 
 
This class wraps a Lucene Collector and times the execution of setScorer(), collect(), doSetNextReader(), and needsScores().
 
 
 
 
 
 
Reads a bucket.
 
 
 
 
Result of the significant terms aggregation.
 
Reads a bucket.
 
A base class for all the single bucket aggregations.
 
 
 
 
 
 
 
 
 
 
Reads a bucket.
Results of the TopHitsAggregator.
An internal implementation of ValueCount.
 
 
 
 
Constructs an IntervalsSource based on analyzed text
Base class for scripts used as interval filters, see IntervalsSourceProvider.IntervalFilter
 
 
Builder for IntervalQuery
Factory class for IntervalsSource Built-in sources include IntervalsSourceProvider.Match, which analyzes a text string and converts it to a proximity source (phrase, ordered or unordered depending on how strict the matching should be); IntervalsSourceProvider.Combine, which allows proximity queries between different sub-sources; and IntervalsSourceProvider.Disjunction.
 
 
 
 
 
 
 
 
 
 
 
Represents a repository that exists in the cluster state but could not be instantiated on a node, typically due to invalid configuration.
Thrown on the attempt to create a snapshot with invalid name
 
IP address for use in scripting.
 
Used if we do not have global ordinals, such as in the IP runtime field see: IpScriptFieldData
Used if we have access to global ordinals
A FieldMapper for ip addresses.
 
 
Script producing IP addresses.
 
 
 
A ip prefix aggregation.
A bucket in the aggregation where documents fall in
A builder for IP prefix aggregations.
 
An IP prefix aggregator for IPv6 or IPv4 subnets.
 
 
 
This class contains utility functionality to build an Automaton based on a prefix String on an `ip` field.
 
 
 
BlockDocValuesReader implementation for keyword scripts.
 
 
 
 
 
 
 
 
A response to an action which updated the cluster state, but needs to report whether any relevant nodes failed to apply the update.
 
A class encapsulating the usage of a particular "thing" by something else
 
 
 
A parser for date/time formatted text with optional date math.
 
 
Triggered by a StartJoinRequest, instances of this class represent join votes, and have a voting and master-candidate node.
Handler for cluster join commands.
 
Tracks nodes that were recently in the cluster, and uses this information to give extra details if these nodes rejoin the cluster.
 
 
 
 
Coordinates the join validation process.
Outputs the Throwable portion of the LoggingEvent as a JSON-formatted field with an array: "exception": [ "stacktrace", "lines", "as", "array", "elements" ]. Reuses org.apache.logging.log4j.core.pattern.ExtendedThrowablePatternConverter, which already converts a Throwable from a LoggingEvent into a multiline string.
 
 
 
 
 
 
 
 
 
 
 
 
Binding key consisting of an injection type and an optional annotation.
Defines behavior for comparing bucket keys to impose a total ordering of buckets of the same type.
The atomic field data implementation for FlattenedFieldMapper.KeyedFlattenedFieldType.
This class manages locks.
An EnvironmentAwareCommand that needs to access the elasticsearch keystore, possibly decrypting it if it is password protected.
A disk based container for sensitive settings in Elasticsearch.
 
A field mapper for keywords.
 
 
 
 
BlockDocValuesReader implementation for keyword scripts.
 
 
 
A knn retriever is used to represent a knn search with some elements to specify parameters for that knn search.
A query that matches the provided docs with their scores.
A query that matches the provided docs with their scores.
Defines a kNN search to run in the search request.
 
A builder used in RestKnnSearchAction to convert the kNN REST request into a SearchRequestBuilder.
A query that performs kNN search using Lucene's KnnFloatVectorQuery or KnnByteVectorQuery.
A publication can succeed and complete before all nodes have applied the published state and acknowledged it; however we need every node eventually either to apply the published state (or a later state) or be removed from the cluster.
 
An action listener that allows passing in a CountDownLatch that will be counted down after onResponse or onFailure is called
 
 
Lucene geometry query for BinaryShapeDocValuesField.
Encapsulates a CheckedSupplier which is lazily invoked once on the first call to #getOrCompute().
API that lazily rolls over a data stream that has the flag DataStream.rolloverOnWrite() enabled.
 
This is a modified version of SoftDeletesDirectoryReaderWrapper that materializes the liveDocs bitset lazily.
 
 
 
Tracks the state of sliced subtasks and provides unified status information for a sliced BulkByScrollRequest.
The LeaderChecker is responsible for allowing followers to check that the currently elected leader is still connected and healthy.
 
Collects results for a particular segment.
A LeafBucketCollector that delegates all calls to the sub leaf aggregator and sets the scorer on its source of values if it implements ScorerAware.
 
Specialization of LeafNumericFieldData for floating-point numerics.
The thread safe LeafReader level cache of the data.
Defines how to populate the values of a FieldLookup
 
LeafFieldData specialization for histogram data.
Specialization of LeafNumericFieldData for integers.
Manages loading information about nested documents for a single index segment
Specialization of LeafFieldData for numeric data.
Specialization of LeafFieldData for data that is indexed with ordinals.
LeafFieldData specialization for geo points and points.
Provides direct access to a LeafReaderContext
RuntimeField base class for leaf fields that will only ever return a single MappedFieldType from RuntimeField.asMappedFieldTypes().
Per-leaf ScoreFunction.
Per-segment version of SearchLookup.
Loads stored fields for a LeafReader Which stored fields to load will be configured by the loader's parent StoredFieldLoader
 
Leak tracking mechanism that allows for ensuring that a resource has been properly released before a given object is garbage collected.
Deprecated.
BM25Similarity should be used instead
 
Legacy version of PerFieldMapperCodec.
 
Field mapper to access the legacy _type that existed in Elasticsearch 5
 
Lifecycle state.
 
 
Contains information about the execution of a lifecycle policy for a single index, and serializes/deserializes this information to and from custom index metadata.
 
 
 
This analyzer limits the highlighting once it sees a token with a start offset <= the configured limit, which won't pass and will end the stream.
 
 
Linear interpolation smoothing model.
See the EDSL examples at Binder.
 
A binding to a linked key.
 
 
Models a response to a ListDanglingIndicesRequest.
An ActionListener which allows for the result to fan out to a (dynamic) collection of other listeners, added using SubscribableListener.addListener(org.elasticsearch.action.ActionListener<T>).
An ActionListener which allows for the result to fan out to a (dynamic) collection of other listeners, added using SubscribableListener.addListener(org.elasticsearch.action.ActionListener<T>).
 
A request to get node tasks
Builder for the request to retrieve the list of tasks running on the specified nodes
Returns the list of tasks currently running on the nodes
Maps _uid value to its version information.
 
Keeps track of the old map of a LiveVersionMap that gets evacuated on a refresh
 
 
 
This class generates sequence numbers and keeps track of the so-called "local checkpoint", which is the highest number for which all previous sequence numbers have been processed (inclusive).
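The local-checkpoint idea can be sketched as follows: sequence numbers are marked as processed, and the checkpoint advances over the contiguous prefix of processed numbers. The class below is an illustrative toy, not the actual tracker, which manages memory far more carefully.

    import java.util.BitSet;

    // Toy local checkpoint tracker: the checkpoint is the highest seqNo such
    // that every seqNo up to and including it has been marked processed.
    final class CheckpointSketch {
        private final BitSet processed = new BitSet();
        private long checkpoint = -1;     // nothing processed yet
        private long nextSeqNo = 0;

        synchronized long generateSeqNo() {
            return nextSeqNo++;
        }

        synchronized void markProcessed(long seqNo) {
            processed.set(Math.toIntExact(seqNo));            // sketch only: unbounded BitSet
            while (processed.get(Math.toIntExact(checkpoint + 1))) {
                checkpoint++;                                  // advance over the contiguous prefix
            }
        }

        synchronized long getProcessedCheckpoint() {
            return checkpoint;
        }
    }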
Utilities for dealing with Locale objects
This class monitors the local health of the node, such as the load and any errors that can be specific to a node (as opposed to errors that are cluster-wide).
An implementation of SecureSettings which loads the secrets from externally mounted local directory.
Used to execute things on the master service thread on nodes that are not necessarily master
Enables listening to master change events of the local node (when the local node becomes the master, and when the local node ceases being the master).
Converts utc into local time and back again.
 
How to get instances of LocalTimeOffset.
 
 
 
 
 
 
An InfoStream (for Lucene's IndexWriter) that redirects messages to "lucene.iw.ifd" and "lucene.iw" Logger.trace.
Format string for Elasticsearch log messages.
A set of utilities around Logging.
 
Elasticsearch plugins may provide an implementation of this class (via SPI) in order to add extra fields to the JSON based log file.
Logs deprecations to the DeprecationLogger.
An ActionListener that just logs the task and its response at the info level.
This Elastic-internal API bridge class exposes package-private components of Ingest in a way that can be consumed by Logstash's Elastic Integration Filter without expanding Elasticsearch's externally-consumable API.
Abstraction of an array of long values.
A monotonically increasing long metric based on a callback.
Represent hard_bounds and extended_bounds in date-histogram aggregations.
A monotonically increasing metric that uses a long.
 
 
 
 
 
Record non-additive long values based on a callback
This wrapper allows us to record a metric with APM (via LongGauge) while also accessing its current state via an AtomicLong
Specialized hash table implementation similar to BytesRefHash that maps long values to ids.
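The long-to-id contract can be pictured with the boxed sketch below: the first time a value is seen it receives the next dense ordinal, and later lookups return that same id. A real implementation uses open addressing over primitive arrays to avoid boxing, and the negative-return convention for existing keys shown here is an assumption for illustration.

    import java.util.HashMap;
    import java.util.Map;

    // Toy "map long values to dense ids" table.
    final class LongToIdSketch {
        private final Map<Long, Long> ids = new HashMap<>();

        // Returns the newly assigned id, or -1 - existingId if the key was already present.
        long add(long key) {
            Long existing = ids.get(key);
            if (existing != null) {
                return -1 - existing;
            }
            long id = ids.size();
            ids.put(key, id);
            return id;
        }

        // Returns the id for the key, or -1 if the key has never been added.
        long find(long key) {
            Long existing = ids.get(key);
            return existing == null ? -1 : existing;
        }
    }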
Record arbitrary values that are summarized statistically, useful for percentiles and histograms.
Maps owning bucket ordinals and long bucket keys to bucket ordinals.
An iterator for buckets inside a particular owningBucketOrd.
Implementation that works properly when collecting from many buckets.
Implementation that packs the owningBucketOrd into the top bits of a long and uses the bottom bits for the value.
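The packing trick described above amounts to plain bit arithmetic: reserve the low bits of a long for the value and shift the owning bucket ordinal into the high bits. The sketch below is illustrative only.

    // Pack/unpack an (owningBucketOrd, value) pair into a single long,
    // with the owning bucket ordinal in the high bits.
    final class PackedOrdsSketch {
        private final int valueBits;      // low bits reserved for the value
        private final long valueMask;

        PackedOrdsSketch(int valueBits) {
            this.valueBits = valueBits;
            this.valueMask = (1L << valueBits) - 1;
        }

        long pack(long owningBucketOrd, long value) {
            return (owningBucketOrd << valueBits) | (value & valueMask);
        }

        long owningBucketOrd(long packed) {
            return packed >>> valueBits;
        }

        long value(long packed) {
            return packed & valueMask;
        }
    }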
Implementation that only works if it is collecting from a single bucket.
Specialized hash table implementation similar to BytesRefHash that maps two long values to ids.
A hash table from native longs to objects.
 
Result of the RareTerms aggregation when the field is some kind of whole number like an integer, a long, or a date.
 
An aggregator that finds "rare" string values (e.g.
BlockDocValuesReader implementation for long scripts.
 
 
 
 
 
 
 
 
 
 
Result of the TermsAggregator when the field is some kind of whole number like an integer, a long, or a date.
 
A counter that supports decreasing and increasing values.
Comparator source for long values.
 
A LookupField is an **unresolved** fetch field whose values will be resolved later in the fetch phase on the coordinating node.
A runtime field that retrieves fields from related indices.
Normalizer used to lowercase values
Builds an analyzer for normalization that lowercases terms.
 
Fork of Document with additional functionality.
 
 
This file is forked from the https://netty.io project.
 
This class represents the manifest file, which is the entry point for reading meta data from disk.
An API to bind multiple map entries separately, only to later inject them as a complete map.
The actual mapbinder plays several roles:
 
 
 
This defines the core properties and functions to operate on a field.
 
Operation to specify what data structures are used to retrieve field data from and generate a representation of doc values.
 
An enum used to describe the relation between the range of terms in a shard when compared with a query range
 
 
 
Holds context for building Mapper objects from their Builders
 
 
Holds context used when merging mappings.
 
An extension point for Plugin implementations to add custom mappers
A registry for all field mappers.
 
The reason why a mapping is being merged.
Wrapper around everything that defines a mapping, without references to utility classes like MapperService, ...
A (mostly) immutable snapshot of the current mapping of an index with access to everything we need for the search phase.
Key for the lookup to be used in caches.
Mapping configuration for a type.
Parser for Mapping provided in CompressedXContent format
Holds everything that is needed to parse mappings.
Usage statistics about mappings usage.
Called by shards in the cluster when their mapping was dynamically updated and it needs to be updated in the cluster state metadata (and broadcast to all members).
 
 
 
An immutable implementation of Map.Entry.
An aggregator of string values that hashes the strings on the fly rather than up front like the GlobalOrdinalsStringTermsAggregator.
 
Abstraction on top of building collectors to fetch values so terms, significant_terms, and significant_text can share a bunch of aggregation code.
Fetch values from a ValuesSource.
This class represents a node's view of the history of which nodes have been elected master over the last 30 minutes.
This action is used to fetch the MasterHistory from a remote node.
 
 
This transport action fetches the MasterHistory from a remote node.
This service provides access to this node's view of the master history, as well as access to other nodes' view of master stability.
 
Base request builder for master node operations
Base request builder for master node read operations that can be executed on the local node as well
Base request for master based read operations that allows to read the cluster state from the local node if needed
A based request for master based operation.
 
 
A queue of tasks for the master service to execute.
An optimized implementation of BitSet that matches all documents to reduce memory usage.
A query that matches on all documents.
The boolean prefix query analyzes the input text and creates a boolean query containing a Term query for each term, except for the last term, which is used to create a prefix query
 
Returns true or false for a given input.
 
Matcher implementations.
A query that matches no document.
 
Match query is a query that analyzes the text and constructs a phrase prefix query as the result of the analysis.
Match query is a query that analyzes the text and constructs a phrase query as the result of the analysis.
Match query is a query that analyzes the text and constructs a query as the result of the analysis.
 
 
 
Condition for index maximum age.
 
 
 
Condition for maximum index docs.
Condition for maximum shard docs.
A size-based condition for the primary shards within an index.
An allocation decider that prevents shards from being allocated on any node if the shards allocation has been retried N times without success.
A collector that computes the maximum score.
A maximum size-based condition for an index size.
 
An aggregation that approximates the median absolute deviation of a numeric field
 
 
 
 
Injects dependencies into the fields and methods on instances of type T.
Utility methods to get memory sizes.
Query merging two point in range queries.
A shard in elasticsearch is a Lucene index, and a Lucene index is broken down into segments.
 
 
The merge scheduler (ConcurrentMergeScheduler) controls the execution of merge operations once they are needed (according to the merge policy).
 
An error message and the context in which it occurred.
This MessageDigests class provides convenience methods for obtaining thread local MessageDigest instances for MD5, SHA-1, SHA-256 and SHA-512 message digests.
Metadata is the part of the ClusterState which persists across restarts.
Ingest and update metadata available to write scripts.
 
Custom metadata that persists (via XContent) across restarts.
The properties of a metadata field.
The operation being performed on the value in the map.
 
 
 
Service responsible for submitting create index requests
Handles data stream modification requests.
Deletes indices.
 
A mapper for a builtin field containing metadata about a document.
 
 
A type parser for an unconfigurable metadata field.
 
Service responsible for submitting add and remove aliases requests
Service responsible for submitting open/close index requests as well as for adding index blocks
Service responsible for submitting index templates updates
 
 
Service responsible for submitting mapping changes
 
 
Service responsible for handling rollover requests for write aliases and data streams
 
 
MetadataStateFormat is a base class to write checksummed XContent based files to one or more directories in a standardized directory structure.
Service responsible for submitting update index settings requests
Upgrades Metadata on startup on behalf of installed Plugins
Handles writing and loading Manifest, Metadata and IndexMetadata as used for cluster state persistence in versions prior to Version.V_7_6_0, used to read this older format during an upgrade from these versions.
 
Container for metering instruments.
 
Counterpart to AggregationInspectionHelper, providing helpers for some aggs that have package-private getters.
 
 
 
Handles updating the FeatureMigrationResults in the cluster state.
 
Condition for index minimum age.
 
 
MinAndMax<T extends Comparable<? super T>>
A class that encapsulates a minimum and a maximum, that are of the same type and Comparable.
 
 
A Query that only matches documents that are greater than or equal to a configured doc ID.
Condition for minimum index docs.
Condition for minimum shard docs.
A size-based condition for the primary shards within an index.
A Scorer that filters out documents that have a score that is lower than a configured constant.
A minimum size-based condition for an index size.
A missing aggregation.
 
 
 
 
Exception indicating that not all requested operations from LuceneChangesSnapshot are available.
 
Utility class that allows to return views of ValuesSources that replace the missing value with a configured value.
 
 
Represents the portion of a model that contains sensitive data
 
 
A module contributes configuration information, typically interface bindings, which will be used to create an Injector.
 
Support methods for creating a synthetic module.
 
 
A more like this query that finds documents that are "like" the provided set of document(s).
A single item to be used for a MoreLikeThisQueryBuilder.
Static methods for working with types that we aren't publishing in the public Types API.
 
 
The WildcardType interface supports multiple upper bounds and multiple lower bounds.
Deprecated.
Only for 7.x rest compat
A command that moves a shard from a specific node to another node.
Note: The shard needs to be in the state ShardRoutingState.STARTED in order to be moved.
Represents a decision to move a started shard, either because it is no longer allowed to remain on its current node or because moving it to another node will form a better cluster balance.
Provides a collection of static utility methods that can be referenced from MovingFunction script contexts
An API to bind multiple values separately, only to later inject them as a complete collection.
The actual multibinder plays several roles:
A BucketCollector which allows running a bucket collection with several BucketCollectors.
An aggregation service that creates instances of MultiBucketConsumerService.MultiBucketConsumer.
An IntConsumer that throws a MultiBucketConsumerService.TooManyBucketsException when the sum of the provided values is above the limit (`search.max_buckets`).
 
An aggregation that returns multiple buckets
A bucket represents a criterion to which all documents that fall into it adhere.
 
File chunks are sent/requested sequentially by at most one thread at any time.
 
A session that can perform multiple gets without wrapping searchers multiple times.
 
A stateful lightweight per document set of GeoPoint values.
A single multi get response.
 
A single get item.
A multi get document action request builder.
 
Represents a failure.
 
 
Same as MatchQueryBuilder but supports multiple fields.
 
 
Ordinals implementation which is efficient at storing field data ordinals for multi-valued or sparse fields.
 
A stateful lightweight per document set of SpatialPoint values.
A multi search API request.
A request builder for multiple search requests.
A multi search response.
 
A search response item, holding the actual search response, or an error message if it failed.
 
 
A single multi get response.
 
 
 
Represents a failure.
 
 
 
Defines what values to pick in the case a document contains multiple values for a particular field.
Class to encapsulate a set of ValuesSource objects labeled by field name
 
Similar to ValuesSourceAggregationBuilder, except it references multiple ValuesSources (e.g.
 
 
 
 
 
Wraps MurmurHash3 to provide an interface similar to MessageDigest that allows hashing of byte arrays passed through multiple calls to Murmur3Hasher.update(byte[]).
Hash function based on the Murmur3 algorithm, which is the default as of Elasticsearch 2.0.
MurmurHash3 hashing functions.
A 128-bits hash.
 
 
Annotates named things.
Named analyzer is an analyzer wrapper around an actual analyzer (NamedAnalyzer.analyzer()) that is associated with a name (NamedAnalyzer.name()).
Reads named components declared by a plugin in a cache file.
NamedDiff<T extends Diffable<T>>
Diff that also support NamedWriteable interface
Diff that also support VersionedNamedWriteable interface
Value Serializer for named diffables
A registry from String to some class implementation.
A Writeable object identified by its name.
Wraps a StreamInput and associates it with a NamedWriteableRegistry
A registry for Writeable.Reader readers of NamedWriteable.
An entry in the registry, made up of a category class and name, and a reader for that category class.
Provides named XContent parsers.
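The registry entries above describe lookups keyed by a category class plus a name, resolving to a reader for that category. A small hypothetical sketch of such a lookup structure (this mirrors the idea only and is not the NamedWriteableRegistry API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.function.Function;

// Hypothetical registry keyed by (category class, name); not the real
// NamedWriteableRegistry, just an illustration of the lookup shape.
final class SimpleNamedRegistry {
    private record Key(Class<?> category, String name) {}

    private final Map<Key, Function<byte[], ?>> readers = new HashMap<>();

    <T> void register(Class<T> category, String name, Function<byte[], T> reader) {
        readers.put(new Key(category, name), reader);
    }

    <T> Function<byte[], T> getReader(Class<T> category, String name) {
        @SuppressWarnings("unchecked")
        Function<byte[], T> reader = (Function<byte[], T>) readers.get(new Key(category, name));
        return Objects.requireNonNull(reader, "no reader registered for " + name);
    }
}
```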
 
 
A nested aggregation.
 
 
 
 
Manages loading information about nested documents
Utility class to filter parent and children clauses when building nested queries.
Holds information about nested mappings
A Mapper for nested objects
 
 
 
 
 
During query parsing this keeps track of the current nested level.
 
Utility methods for dealing with nested mappers
 
Utility functions for presentation of network addresses.
 
Represents a transport message sent over the network.
A module to handle registering and binding all network related classes.
Deprecated.
 
A custom name resolver can support custom lookup keys (my_net_key:ipv4) and also change the default inet address used in case no settings are provided.
 
Utilities for network interfaces / addresses binding and publishing.
A specific type of SettingsException indicating failure to load a class based on a settings value.
A node represents a node within a cluster (cluster.name).
Tracks the order in which nodes are used for allocation so that we can allocate shards to nodes in a round-robin fashion (all else being equal).
This class represents the shard allocation decision and its explanation for a single node.
A class that captures metadata about a shard store on a node.
 
Deprecated.
this class is kept in order to allow working log configuration from 7.x
The NodeAndClusterIdStateListener listens to cluster state changes and ONLY when it receives the first update does it set the clusterUUID and nodeID in the log4j pattern converter NodeIdConverter.
Client that executes actions on the local node.
An exception indicating that node is closed.
This component is responsible for maintaining connections from this node to all the nodes listed in the cluster state, and for disconnecting from nodes once they are removed from the cluster state.
 
A component that holds all data paths for a single node.
 
 
A functional interface that people can use to reference NodeEnvironment.shardLock(ShardId, String, long)
A feature published by a node.
Used when querying every node in the cluster for a specific dangling index.
Used when querying every node in the cluster for a specific dangling index.
This exception is thrown if the file system is reported unhealthy by the FsHealthService and this node needs to be removed from the cluster
 
 
Pattern converter to format the node_id variable into the JSON field node.id.
Global information on indices stats running on a specific node.
Node information (static, does not change over time).
 
 
 
Used when querying every node in the cluster for dangling indices, in response to a list request.
Used when querying every node in the cluster for dangling indices, in response to a list request.
Node stats for mappings, useful for estimating the overhead of MappingLookup on data nodes.
Metadata associated with this node: its persistent node ID and its version.
NodeMetrics monitors various statistics of an Elasticsearch node and exposes them as metrics through the provided MeterRegistry.
Converts %node_name in log4j patterns into the current node name.
An exception indicating that a message is sent to a node that is not connected.
This component is responsible for execution of persistent tasks.
A node-specific request derived from the corresponding PrevalidateShardPathRequest.
 
An allocation decider that ensures that all the shards allocated to the node scheduled for removal are relocated to the replacement node.
 
 
 
 
 
An allocation decider that prevents shards from being allocated to a node that is in the process of shutting down.
This class is a container that encapsulates the necessary information needed to indicate which node information is requested.
An enumeration of the "core" sections of metrics that may be requested from the nodes information endpoint.
A request to get node (cluster) level information.
 
 
 
Request for a reload secure settings action
 
The response for the reload secure settings action
 
 
 
 
The prevalidation result of a node.
Contains the data about nodes which are currently configured to shut down, either permanently or temporarily.
Handles diffing and applying diffs for NodesShutdownMetadata as necessary for the cluster state infrastructure.
A request to get node (cluster) level stats.
 
This class encapsulates the metrics and other information needed to define scope when we are requesting node stats.
An enumeration of the "core" sections of metrics that may be requested from the nodes stats endpoint.
 
Node statistics (dynamic, changes depending on when created).
 
 
The response for the nodes usage api which contains the individual usage statistics for all nodes queried.
 
 
An exception thrown during node validation.
An allocation decider that prevents relocation or allocation from nodes that might not be version compatible.
 
An aggregator that is not collected, this can typically be used when running an aggregation over a field that doesn't have a mapping.
A NumericMetricsAggregator.SingleValue that is not collected, this can typically be used when running an aggregation over a field that doesn't have a mapping.
A NumericMetricsAggregator.SingleValue that is not collected, this can typically be used when running an aggregation over a field that doesn't have a mapping.
Class that returns a breaker that never breaks
 
 
A Similarity that rejects negative scores.
An exception indicating no node is available to perform the operation.
A wrapper class for notifying listeners on non cluster state transformation operation completion.
A CircuitBreaker that doesn't increment or adjust, and all operations are basically noops
NoOpEngine is an engine implementation that does nothing but the bare minimum required in order to have an engine.
A CharFilterFactory that also supports normalization. The default implementation of NormalizingCharFilterFactory.normalize(Reader) delegates to CharFilterFactory.create(Reader)
A TokenFilterFactory that may be used for normalization. The default implementation delegates NormalizingTokenFilterFactory.normalize(TokenStream) to TokenFilterFactory.create(TokenStream).
Thrown after completely failing to connect to any node of the remote cluster.
 
 
An exception indicating that a remote cluster is missing or that connectivity to the remote cluster is failing
Exception indicating that we were expecting something compressed, but it was either not compressed or was corrupted so that the compression format could not be detected.
Exception which indicates that an operation failed because the node stopped being the elected master.
Exception which indicates that an operation failed because the node stopped being the node on which the PersistentTask is allocated.
This exception can be used to wrap a given, non-serializable exception to serialize via StreamOutput.writeException(Throwable).
 
Exception indicating that we were expecting some XContent but could not detect its type.
Whether a member supports having null values injected.
A FieldMapper for numeric types: byte, short, int, long, float and double.
 
 
 
A set of utilities for numbers.
 
A factory to construct stateful NumberSortScript factories for a specific index.
A factory to construct NumberSortScript instances.
A per-document numeric value.
An aggregator for numeric values.
 
 
 
 
 
 
 
 
 
 
Abstraction of an array of object values.
A priority queue maintains a partial ordering of its elements such that the least element can always be found in constant time.
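The priority queue entry above describes the partial-ordering property that lets the least element be read in constant time. A tiny demonstration of the same property using the JDK's PriorityQueue (not the Elasticsearch class):

```java
import java.util.PriorityQueue;

// Demonstrates the property described above with java.util.PriorityQueue:
// peek() returns the least element without removing it, in constant time.
final class PriorityQueueDemo {
    public static void main(String[] args) {
        PriorityQueue<Integer> queue = new PriorityQueue<>();
        queue.add(42);
        queue.add(7);
        queue.add(19);
        System.out.println(queue.peek()); // 7, the least element
    }
}
```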
 
 
 
 
 
 
A hash table from objects to objects.
 
This class provides helpers for ObjectParser that allow dealing with classes outside of the xcontent dependencies.
A wrapping processor that adds failure handling logic around the wrapped processor.
Represents a single on going merge within an index.
Represents the behaviour when a runtime field or an index-time script fails: either fail and raise the error, or continue and ignore the error.
 
Cluster state update request that allows to open one or more indices
A request to open an index.
Builder for an open index request
A response for an open index action.
 
 
 
The purpose of an operation against the blobstore.
 
 
Condition for automatically increasing the number of shards for a data stream.
A potentially-missing BytesReference, used to represent the contents of a blobstore register along with the possibility that the register could not be read.
This class iterates all shards from all nodes in order of allocation recency.
A thread safe ordinals abstraction.
 
Simple class to build document ID <-> ordinal mapping.
 
Used to keep track of original indices within internal (e.g.
A Client that sends requests with the origin set to a particular value and calls its ActionListener in its original ThreadContext.
 
The OsProbe class retrieves information about the physical and swap size of the machine memory, as well as the system load average and cpu load.
 
 
Encapsulates basic cgroup statistics.
Encapsulates CPU time statistics.
 
 
 
 
 
A recycler of fixed-size pages.
 
 
 
 
A page based bytes reference, internally holding the bytes in a paged data structure.
A paged result that includes total number of results and the results for the current page
A Client that sets the parent task on all requests that it makes.
The result of parsing a document.
The result of parsing a query.
 
Exception that can be used when parsing queries with a given XContentParser.
Mapper for pass-through objects.
 
 
 
 
 
Service in charge of computing a ShardRecoveryPlan using only the physical files from the source peer.
 
 
The source recovery accepts recovery requests from other peer shards and starts the recovery process from this source shard to the target shard.
 
The recovery target handles recoveries of peer shards of the shard+node to recover to.
 
 
 
 
Class encapsulating stats about the PendingClusterStatsQueue
 
 
 
 
 
 
 
 
An aggregation that computes approximate percentiles given values.
 
An aggregation that computes approximate percentiles.
 
 
 
 
 
A small config object that carries algo-specific settings.
 
 
An enum representing the methods for calculating percentiles
Class that encapsulates the logic of figuring out the most appropriate file format for a given field, across postings, doc values and vectors.
This Lucene codec provides the default PostingsFormat and KnnVectorsFormat for Elasticsearch.
Stores cluster metadata in a bare Lucene index (per data path) split across a number of documents.
 
 
 
Exception which indicates that the PersistentTask node has not been assigned yet.
Parameters used to start a persistent task
Plugin for registering persistent tasks executors.
Response upon a successful start or update of a persistent task
Component that runs only on the master node and is responsible for assigning running tasks to nodes
A cluster state record that contains a list of all running persistent tasks
 
 
A record that represents a single running persistent task
An executor of tasks that can survive restart of requesting or executing node.
Component that registers all persistent task executors
This component is responsible for coordination of execution of persistent tasks on individual nodes.
 
This service is used by persistent tasks and allocated persistent tasks to communicate changes to the master node so that the master can update the cluster state and can keep track of the states of the persistent tasks.
 
PersistentTaskState represents the state of the persistent tasks, as it is persisted in the cluster state.
 
Suggestion entry returned from the PhraseSuggester.
 
 
Defines the actual suggest command for phrase suggestions (phrase).
A pipeline is a list of Processor instances grouped under a unique id.
A factory that knows how to create a PipelineAggregator of a specific type.
 
 
Tree of PipelineAggregators to modify a tree of aggregations after their final reduction.
 
Encapsulates a pipeline's id and configuration as a blob
 
 
Mapper that is used to map existing fields in legacy indices (older than N-1) that the current version of ES can't access anymore.
 
 
 
 
 
The PlainShardIterator is a ShardsIterator which iterates all shards of a given shard id.
A simple ShardsIterator that iterates a list or sub-list of shard indexRoutings.
Encapsulates platform-dependent methods for handling native components of plugins.
An extension point allowing to plug in custom functionality.
Provides access to various Elasticsearch services.
Information about APIs extended by a custom plugin.
A "bundle" is a group of jars that will be loaded in their own classloader
Describes the relationship between an interface and an implementation for a Plugin component.
An in-memory representation of the plugin descriptor.
 
 
Runtime information about a plugin that was loaded.
Information about plugins and modules
The PluginShutdownService is used for the node shutdown infrastructure to signal to plugins that a shutdown is occurring, and to check whether it is safe to shut down.
 
Utility methods for loading plugin information from disk and for sorting lists of plugins
 
A search request with a point in time will execute using the reader contexts associated with that point in time instead of the latest reader contexts.
Per-document geo-point or point values.
 
 
 
 
Action for beginning a system feature upgrade
Request to begin an upgrade of system features
The response to return to a request for a system feature upgrade
A data class representing a feature that is to be upgraded
 
CircuitBreakerService that preallocates some bytes on construction.
 
 
 
 
The strategy of caching the analyzer; ONE: exactly one version is stored.
 
 
Shared implementation for pre-configured analysis components.
Provides pre-configured, shared CharFilters.
Provides pre-configured, shared TokenFilters.
Provides pre-configured, shared Tokenizers.
Routing Preference Type
A Query that matches documents containing terms with a specified prefix.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
The primary shard allocator allocates unassigned primary shards to nodes that hold valid copies of the unassigned primaries.
 
A prioritizing executor which uses a priority queue as a work queue.
 
 
PrioritizedThrottledTaskRunner performs the enqueued tasks in the order dictated by the natural ordering of the tasks, limiting the max number of concurrently running tasks.
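The entry above describes running enqueued tasks in their natural order while capping how many run concurrently. A rough JDK-only sketch of that combination, using a PriorityBlockingQueue drained by a fixed number of worker threads (this is an illustration of the idea, not the Elasticsearch implementation):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.PriorityBlockingQueue;

// Sketch of "run queued tasks in natural order, at most N at a time";
// not the Elasticsearch PrioritizedThrottledTaskRunner itself.
final class ThrottledPriorityRunner<T extends Runnable & Comparable<T>> {
    private final PriorityBlockingQueue<T> queue = new PriorityBlockingQueue<>();
    private final ExecutorService workers;

    ThrottledPriorityRunner(int maxConcurrent) {
        workers = Executors.newFixedThreadPool(maxConcurrent);
        for (int i = 0; i < maxConcurrent; i++) {
            workers.submit(() -> {
                try {
                    while (true) {
                        queue.take().run(); // blocks until the next highest-priority task
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
    }

    void enqueue(T task) {
        queue.add(task);
    }
}
```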
 
A comparator that compares ShardRouting instances based on various properties.
Returns a binder whose configuration information is hidden from its environment by default.
The result of a "probe" connection to a transport address, if it successfully discovered a valid node and established a full connection with it.
 
 
 
A processor implementation may modify the data belonging to a document.
A factory that knows how to construct a processor based on a map of maps.
Infrastructure class that holds services that can be used by processor factories to create processor instances and that gets passed around to all IngestPlugins.
 
 
 
 
 
 
 
A CollectorManager that takes another CollectorManager as input and wraps all Collectors generated by it in an InternalProfileCollector.
The result of a profiled *thing*, like a query or an aggregation.
Wrapper around all the profilers that makes management easier.
Weight wrapper that will compute how much time it takes to build the Scorer and then return a Scorer that is wrapped in order to compute timings as well.
 
 
This interface includes the declaration of an abstract method, profile().
 
 
A mapper for the _id field that reads the id from SourceToParse.id().
An object capable of providing instances of type T.
A binding to a Provider that delegates to the binding for the provided type.
A binding to a provider instance.
 
A binding to a provider key.
A lookup of the provider for a type.
 
Static utility methods for creating and working with instances of Provider.
A provider with dependencies on other injected types.
Indicates that there was a runtime failure while providing an instance.
 
 
 
Implements the low-level mechanics of sending a cluster state to other nodes in the cluster during a publication.
Class encapsulating stats about the PublishClusterStateAction
Request which is used by the master node to publish cluster state changes.
Response to a PublishRequest, carrying the term and version of the request.
Response to a PublishRequest.
An action for putting a single component template into the cluster state
A request for putting a single component template into the cluster state
A request to create an index template.
 
Cluster state update request that allows to put a mapping
Puts mapping definition into one or more indices.
Builder for a put mapping request
 
 
 
Register repository request.
Register repository request builder
 
 
 
 
 
 
 
 
Utility class to create search queries.
 
 
 
 
Defines a query parser that is able to parse QueryBuilders from XContent.
Helpers to extract and expand field names and boosts
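The field names and boosts helper above works on expressions of the form field^boost. A small hypothetical parser for that syntax, shown only to illustrate the convention (it is not the Elasticsearch helper class):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical parser for "field^boost" expressions (e.g. "title^2.5");
// an illustration of the convention, not the Elasticsearch helper.
final class FieldBoostParser {
    static Map<String, Float> parse(String... expressions) {
        Map<String, Float> fields = new LinkedHashMap<>();
        for (String expression : expressions) {
            int caret = expression.indexOf('^');
            if (caret < 0) {
                fields.put(expression, 1.0f); // no boost given: default to 1.0
            } else {
                fields.put(expression.substring(0, caret),
                           Float.parseFloat(expression.substring(caret + 1)));
            }
        }
        return fields;
    }

    public static void main(String[] args) {
        System.out.println(parse("title^2.5", "body")); // {title=2.5, body=1.0}
    }
}
```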
 
Query phase of a search request, used to run the query and get back from each shard information about the matching documents (document ids and score or sort criteria) so that matches can be reduced on the coordinating node
Top-level collector used in the query phase to perform top hits collection as well as aggs collection.
 
Includes result returned by a search operation as part of the query phase.
A ArraySearchPhaseResults implementation that incrementally reduces aggregation results as shard results are consumed.
A record of timings for the various operations that may happen during query execution.
This class acts as a thread-local storage for profiling a query.
A container class to hold the profile results for a single shard in the request.
 
 
 
 
Context object used to rewrite QueryBuilder instances into a simplified version.
 
 
Exception that is thrown when creating lucene queries on the shard
A query that parses a query string and runs it.
A QueryParser that uses the MapperService in order to build smarter queries based on the mapping information.
 
Adapts a Lucene Query to the behaviors used by the FiltersAggregator.
This exception can be used to indicate various reasons why validation of a query has failed.
Provides a mechanism for building a KNN query vector in an asynchronous manner during the rewrite phase
TermsEnum that takes a CircuitBreaker, increasing the breaker every time .next(...) is called.
Provides factory methods for producing reproducible sources of randomness.
 
 
 
A query that randomly matches documents with a user-provided probability within a geometric distribution
Pseudo randomly generate a score for each LeafScoreFunction.score(int, float).
A function that computes a random score for the matched documents
A range aggregation.
A bucket associated with a specific range
 
Aggregator for range.
 
 
 
 
A FieldMapper for indexing numeric and date ranges, and creating queries
 
 
Class defining a range
 
 
A Query that matches documents within a range of terms.
Enum defining the type of range
 
RankContextBuilder is used as a base class to manage input, parsing, and subsequent generation of appropriate contexts for handling searches that require multiple queries for global rank relevance.
RankContext is a base class used to generate ranking results on the coordinator and then set the rank for any search hits that are found.
RankDoc is the base class for all ranked results.
Manages the appropriate values when executing multiple queries on behalf of ranking for a single ranking query.
RankShardContext is a base class used to generate ranking results on each shard where it's responsible for executing any queries during the query phase required for its global ranking method.
This is an interface used as a marker for a VersionedNamedWriteable.
 
A bucket that is associated with a single term
 
 
 
A filter used for throttling deprecation logs.
Rate limiting wrapper for InputStream
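The entry above describes wrapping an InputStream so that reads are throttled. A simplified sketch of such a wrapper that sleeps to keep average throughput under a target rate (this is conceptual only, not the Elasticsearch implementation; single-byte read() calls are not throttled here):

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Simplified rate-limiting InputStream wrapper: sleeps so that average throughput
// stays under bytesPerSecond. Not the Elasticsearch RateLimitingInputStream.
final class ThrottledInputStream extends FilterInputStream {
    private final long bytesPerSecond;
    private final long startNanos = System.nanoTime();
    private long bytesRead;

    ThrottledInputStream(InputStream in, long bytesPerSecond) {
        super(in);
        this.bytesPerSecond = bytesPerSecond;
    }

    @Override
    public int read(byte[] buffer, int offset, int length) throws IOException {
        int read = super.read(buffer, offset, length);
        if (read > 0) {
            bytesRead += read;
            long expectedNanos = bytesRead * 1_000_000_000L / bytesPerSecond;
            long sleepNanos = expectedNanos - (System.nanoTime() - startNanos);
            if (sleepNanos > 0) {
                try {
                    Thread.sleep(sleepNanos / 1_000_000L, (int) (sleepNanos % 1_000_000L));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    throw new IOException("interrupted while throttling", e);
                }
            }
        }
        return read;
    }
}
```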
 
Utility class to represent ratio and percentage values between 0 and 100
Requests that implement this interface will be compressed when TransportSettings.TRANSPORT_COMPRESS is configured to Compression.Enabled.INDEXING_DATA and isRawIndexingData() returns true.
Raw, unparsed status from the task results index.
Holds a reference to a point in time Engine.Searcher that will be used to construct SearchContext.
 
A listener to be notified when the readiness service establishes the port it's listening on.
A basic read-only engine that allows switching a shard to be true read-only temporarily or permanently.
Indicates that a request can execute in realtime (reads from the translog).
Only allow rebalancing when all shards are active within the shard replication group.
 
Computes the optimal configuration of voting nodes in the cluster.
 
This class holds a collection of all on going recoveries on the current node (i.e., the node is the target node of those recoveries).
A reference to RecoveryTarget, which implements Releasable.
Recovery information action
 
 
 
 
 
 
A plugin that allows creating custom RecoveryPlannerService.
 
Request for recovery information
Recovery information request builder.
 
Information regarding the recovery state of indices and their associated shards.
 
 
 
Represents the recovery source of a shard.
Recovery from a fresh copy
Recovery from an existing on-disk store
Recovery from other shards on the same node (shrink index action)
Peer recovery from a primary shard
Recovery from a snapshot
 
RecoverySourceHandler handles the three phases of shard recovery, which is everything relating to copying the segment files as well as sending translog operations across the wire once the segments have been copied.
Keeps track of state related to shard recovery.
 
 
 
 
 
 
 
Recovery related statistics, starting at the shard level and allowing aggregation to indices and node level
Represents a recovery where the current node is the target node of the recovery.
 
 
 
A recycled object, note, implementations should support calling obtain and then recycle on different threads.
 
 
 
A StreamOutput that uses Recycler.V<org.apache.lucene.util.BytesRef> to acquire pages of bytes, which avoids frequent reallocation & copying of the internal data.
 
A failure during a reduce phase (when receiving results from several shards, and reducing them into one or more results and possible actions).
Represents a request for starting a peer recovery.
Same as ThreadedActionListener but for RefCounted types.
A mechanism to complete a listener on the completion of some (dynamic) collection of other actions.
A mechanism to trigger an action on the completion of some (dynamic) collection of other actions.
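The two entries above describe completing a listener, or triggering an action, only once a dynamic collection of other actions has finished. A minimal sketch of the underlying reference-counting idea (the class and method names here are hypothetical, not the Elasticsearch API):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of ref-counted completion: acquire a reference before starting each
// child action, release it when that action completes, and run the completion
// callback once the count drops to zero. Not the Elasticsearch implementation.
final class RefCountingCompletion {
    private final AtomicInteger refs = new AtomicInteger(1); // one ref held by the coordinator
    private final Runnable onAllComplete;

    RefCountingCompletion(Runnable onAllComplete) {
        this.onAllComplete = onAllComplete;
    }

    Runnable acquire() {              // call before starting a child action
        refs.incrementAndGet();
        return this::release;         // the child calls this when it is done
    }

    void finishAcquiring() {          // call once no more children will be started
        release();
    }

    private void release() {
        if (refs.decrementAndGet() == 0) {
            onAllComplete.run();
        }
    }
}
```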
Encapsulates links to pages in the reference docs, so that for example we can include URLs in logs and API outputs.
 
 
Allows for the registration of listeners that are called when a change becomes visible for search.
A refresh request making all operations performed since the last refresh available for search.
A refresh request making all operations performed since the last refresh available for search.
 
 
Regular expression options for completion suggester
Options for regular expression queries
Regular expression syntax flags.
A Query that does fuzzy matching for a specific value.
 
Metadata for the ReindexScript context.
Request to reindex some documents from one index to another.
 
A script used in the reindex api
 
 
A byte size value that allows specification using either of: 1.
 
An extension to BytesReference that requires releasing its content.
A bytes stream output that allows providing a BigArrays instance expecting it to require releasing its content (BytesStreamOutput.bytes()) once done.
Releasable lock used inside of Engine implementations
 
 
 
An extension point for Plugins that can be reloaded.
Request for reloading index search analyzers
The response object that will be returned when reloading analyzers
 
A plugin that may receive a ReloadablePlugin in order to call its ReloadablePlugin.reload(Settings) method.
Holds additional information as to why the shard failed to relocate.
Base class for all services and components that need up-to-date information about the registered remote clusters
 
A client which can execute requests on a specific remote cluster.
 
 
 
 
 
 
Contains the settings and some associated logic for the settings related to the Remote Access port, used by Remote Cluster Security 2.0.
 
Basic service for accessing remote clusters via gateway nodes
Specifies how to behave when executing a request against a disconnected remote cluster.
This class encapsulates all remote cluster information to be rendered on _remote/info requests.
 
 
 
 
 
 
 
 
A remote exception for an action.
 
 
 
Removes corrupted Lucene index segments
 
 
 
 
 
 
 
 
An allocation strategy that only allows for a replica to be allocated when the primary is active.
 
 
Requests that are both ReplicationRequests (run on a shard's primary first, then the replica) and WriteRequest (modify documents on a shard), for example BulkShardRequest, IndexRequest, and DeleteRequest.
Replication group for a shard.
 
An encapsulation of an operation that is to be performed on the primary shard
 
An interface to encapsulate the metadata needed from replica shards when they respond to operations performed on them.
An encapsulation of an operation that will be executed on the replica shards, if present.
 
Requests that are run on a particular replica, first on the primary and then on the replicas like IndexRequest or TransportShardRefreshAction.
 
Base class for write action responses.
 
 
Task that tracks replication actions.
 
This class is responsible for tracking the replication group with its progress and safety markers (local and global checkpoints).
 
Represents the sequence number component of the primary context.
 
 
Health info regarding repository health for a node.
Determines the health of repositories on this node.
Contains metadata about registered snapshot repositories
 
Sets up classes for Snapshot/Restore.
Service responsible for maintaining and providing access to multiple repositories.
Task class that extracts the 'execute' part of the functionality for registering repositories.
Task class that extracts the 'execute' part of the functionality for unregistering repositories.
 
 
 
An interface for interacting with a repository in snapshot and restore.
A factory interface for constructing repositories.
 
 
 
Repository conflict exception
A class that represents the data in a repository, as captured in the repository's index blob.
A few details of an individual snapshot stored in the top-level index blob, so they are readily accessible without having to load the corresponding SnapshotInfo blob for each snapshot.
Generic repository exception
 
This indicator reports health for snapshot repositories.
Metadata about registered repository
Repository missing exception
Coordinates of an operation that modifies a repository, assuming that the repository is at a specific generation.
An extension point for Plugin implementations to add custom snapshot repositories.
Represents a shard snapshot in a repository.
 
 
Repository verification exception
 
 
 
 
Deprecated.
for removal
 
A validator that validates a request associated with indices before executing it.
Class encapsulating the explanation for a single AllocationCommand taken from the Deciders
Asynchronously performs a cluster reroute, updating any shard states and rebalancing the cluster if appropriate.
Context available to the rescore while it is running.
Since SearchContext no longer holds the state of the search, the top K results (i.e., documents that will be rescored by query rescorers) need to be serialized/deserialized between search phases.
Rescore phase of a search request, used to run potentially expensive scoring models against the top matching documents.
A query rescorer interface used to re-rank the Top-K results of a previously executed search.
The abstract base builder for instances of RescorerBuilder.
This Action is the reserved state save version of RestClusterUpdateSettingsAction
Base interface used for implementing 'operator mode' cluster state updates.
SPI service interface for supplying ReservedClusterStateHandler implementations to Elasticsearch from plugins/modules.
Controller class for storing and reserving a portion of the ClusterState
This ReservedClusterStateHandler is responsible for reserved state CRUD operations on composable index templates and component templates, e.g.
This Action is the reserved state save version of RestPutPipelineAction/RestDeletePipelineAction
This Action is the reserved state save version of RestPutRepositoryAction/RestDeleteRepositoryAction
An extension of the HandledTransportAction class, which wraps the doExecute call with a check for clashes with the reserved cluster state.
A holder for the cluster state to be saved and reserved and the version info
A metadata class to hold error information about errors encountered while applying a cluster state update for a given namespace.
Enum for kinds of errors we might encounter while processing reserved cluster state updates.
Cluster state update task that sets the error state of the reserved cluster state metadata.
Metadata class to hold a set of reserved keys in the cluster state, set by each ReservedClusterStateHandler.
Metadata class that contains information about reserved cluster state set through file based settings or by modules/plugins.
Builder class for ReservedStateMetadata
Generic task to update and reserve parts of the cluster state
Reserved cluster state update task executor
File settings metadata class that holds information about versioning and Elasticsearch version compatibility
Action for resetting feature states, mostly meaning system indices
Request for resetting feature state
Response to a feature state reset request.
An object with the name of a feature and a message indicating success or failure.
Success or failure enum.
 
An allocation decider that ensures we allocate the shards of a target index for resize operations next to the source primaries
This class calculates and verifies the number of shards of a target index after a resize operation.
 
 
 
Request class to shrink an index into a single shard
 
A RoutingChangesObserver that removes index settings used to resize indices (Clone/Split/Shrink) once all primaries are started.
The type of the resize operation
 
 
 
 
The result of calling ResolvedRepositories.resolve(ClusterState, String[]) to resolve a description of some snapshot repositories (from a path component of a request to the get-repositories or get-snapshots APIs) against the known repositories in the cluster state: the RepositoryMetadata for the extant repositories that match the description, together with a list of the parts of the description that failed to match any known repository.
 
 
 
 
 
 
 
 
 
Generic ResourceNotFoundException corresponding to the RestStatus.NOT_FOUND status code
Abstract resource watcher interface.
Generic resource watcher service. Other Elasticsearch services can register their resource watchers with this service using the ResourceWatcherService.add(ResourceWatcher) method.
 
Collects statistics about queue size, response time, and service time of tasks executed on each node, making the EWMA of the values available to the coordinating node.
Struct-like class encapsulating a point-in-time snapshot of a particular node's statistics.
A failure to handle the response of a transport action.
 
An action listener that requires RestActionListener.processResponse(Object) to be implemented and will automatically handle failures.
 
NodesResponseRestBuilderListener automatically translates any BaseNodesResponse (multi-node) response that is ToXContent-compatible into a RestResponse with the necessary header info (e.g., "cluster_name").
 
 
 
 
 
 
A REST action listener that builds an XContentBuilder based response.
{ "index" : { "_index" : "test", "_id" : "1" } { "type1" : { "field1" : "value1" } } { "delete" : { "_index" : "test", "_id" : "2" } } { "create" : { "_index" : "test", "_id" : "1" } { "type1" : { "field1" : "value1" } }
A Client that cancels tasks executed locally when the provided HttpChannel is closed before completion.
 
 
Cat API class for handling get componentTemplate.
RestRecoveryAction provides information about the status of replica recovery in a string format, designed to be used at the command line.
A channel used to construct bytes / builder based outputs, and send responses.
A REST based action listener that requires the response to implement ChunkedToXContent and automatically builds an XContent based response.
Cleans up a repository
 
 
 
Clones indices from one snapshot into another snapshot in the same repository
 
 
Class handling cluster allocation explanation at the REST level
 
This response is specific to the REST client.
 
 
 
 
 
 
 
 
 
 
 
Creates a new snapshot
 
 
 
 
 
 
 
 
 
Unregisters a repository
Deletes a snapshot
 
 
 
Rest action for computing a score explanation for specific documents.
 
 
 
Cat API class to display information about the size of fielddata fields per node
 
 
 
 
The REST handler for get alias and head alias APIs.
 
 
 
 
Endpoint for getting the system feature upgrade status
 
 
The REST handler for get template and head template APIs.
The REST handler for get index and head index APIs.
 
 
Returns repository information
 
 
 
Returns information about snapshot
The REST handler for get source and head source APIs.
 
 
 
 
 
Handler for REST requests
 
 
A definition for an http header that should be copied to the ThreadContext when reading the request on the rest layer.
 
 
 
 
 
 
 
 
 
 
 
Wraps the execution of a RestHandler
The REST action for handling kNN searches.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
Information about successfully completed restore operation.
Metadata about restore processes that are currently executing
 
Restore metadata
Represents status of a restored shard
Shard restore process state
This AllocationDecider prevents shards that have failed to be restored from a snapshot from being allocated.
Service responsible for restoring snapshots
 
 
Restore snapshot action
Restore snapshot request
Restore snapshot request builder
Contains information about the restored snapshot
 
 
 
Endpoint for triggering a system feature upgrade
 
 
 
 
 
 
Registers repositories
 
 
 
REST handler to report on index recoveries.
Same as RestChunkedToXContentListener but decrements the ref count on the response it receives by one after serialization of the response.
 
 
 
 
Cat API class to display information about snapshot repositories
 
 
 
 
Identifies an object that supplies a filter for the content of a RestRequest.
Rest handler for feature state reset requests
 
 
 
 
 
 
 
A REST enabled action listener that has a basic onFailure implementation, and requires sub classes to only implement RestResponseListener.buildResponse(Object).
Restores a snapshot
 
 
 
 
An action plugin that intercepts incoming REST requests.
 
 
This is the REST endpoint for the simulate ingest API.
 
 
Cat API class to display information about snapshots
Returns status of currently running snapshot
 
 
 
 
 
 
This class parses the json request and translates it into a TermVectorsRequest.
 
A REST based action listener that requires the response to implement ToXContentObject and automatically builds an XContent based response.
 
 
 
 
 
 
 
 
Deduplicator for arbitrary keys and results that can be used to ensure a given action is only executed once at a time for a given request.
Represents a batch of operations sent from the primary to its replicas during the primary-replica resync.
 
A "shard history retention lease" (or "retention lease" for short) is conceptually a marker containing a retaining sequence number such that all operations with sequence number at least that retaining sequence number will be retained during merge operations (which could otherwise merge away operations that have been soft deleted).
This class holds all actions related to retention leases.
 
 
 
 
 
 
 
Replication action responsible for background syncing retention leases to replicas.
 
 
 
Represents a versioned collection of retention leases.
Represents retention lease stats.
Write action responsible for syncing retention leases to replicas.
 
 
 
Represents an action that is invoked periodically to sync retention leases to replica shards after some retention lease has been renewed or expired.
Represents an action that is invoked to sync retention leases to replica shards after a retention lease is added or removed on the primary.
A retriever represents an API element that returns an ordered list of top documents.
Defines a retriever parser that is able to parse RetrieverBuilders from XContent.
 
Each retriever is given its own NodeFeature so new retrievers can be added individually with additional functionality.
Encapsulates synchronous and asynchronous retry logic.
An action that will be retried on failure if RetryableAction.shouldRetry(Exception) returns true.
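The two retry entries above describe retrying an action while a shouldRetry(Exception) check allows it. A minimal synchronous sketch of that loop with exponential backoff; the real Elasticsearch classes are asynchronous, so this only illustrates the decision logic:

```java
import java.util.concurrent.Callable;
import java.util.function.Predicate;

// Sketch of "retry on failure while shouldRetry(Exception) is true" with simple
// exponential backoff. Not the asynchronous Elasticsearch RetryableAction.
final class RetryDemo {
    static <T> T runWithRetries(Callable<T> action,
                                Predicate<Exception> shouldRetry,
                                int maxAttempts) throws Exception {
        long backoffMillis = 50;
        for (int attempt = 1; ; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                if (attempt >= maxAttempts || shouldRetry.test(e) == false) {
                    throw e; // give up and propagate the last failure
                }
                Thread.sleep(backoffMillis);
                backoffMillis *= 2; // exponential backoff between attempts
            }
        }
    }
}
```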
This file is forked from https://github.com/lz4/lz4-java.
A reverse nested aggregation.
 
 
 
A basic interface for rewriteable classes.
 
Contains the conditions that determine if an index can be rolled over or not.
Helps to build or create a mutation of rollover conditions
This class holds the configuration of the rollover conditions as they are defined in data stream lifecycle.
Parses and keeps track of the condition values during parsing
Class for holding Rollover related information within an index
Request class to swap index under an alias or increment data stream generation upon satisfying conditions
 
Response object for RolloverRequest API. Note: there is a new class with the same name for the Java HLRC that uses a typeless format.
 
 
 
A container for a SecureString that can be rotated with a grace period for the secret that has been rotated out.
 
Basic ShardShuffler implementation that uses an AtomicInteger to generate seeds and uses a rotation to permute shards.
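The shuffler entry above describes generating a fresh seed from an AtomicInteger and rotating the shard list by that amount so load spreads across copies. A sketch of the same rotation idea over a plain list (conceptual only, not the Elasticsearch ShardShuffler):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the rotation idea described above: an AtomicInteger produces a new seed
// per call and the list is rotated by that amount. Not the Elasticsearch class.
final class RotationShuffler {
    private final AtomicInteger seed = new AtomicInteger();

    <T> List<T> shuffle(List<T> shards) {
        List<T> rotated = new ArrayList<>(shards);
        Collections.rotate(rotated, -(seed.getAndIncrement() % Math.max(1, shards.size())));
        return rotated;
    }
}
```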
A strategy for rounding milliseconds since epoch.
 
 
A strategy for rounding milliseconds since epoch.
The RoutingAllocation keeps the state of the current allocation of shards and holds the AllocationDeciders which are responsible for the current routing state.
 
Records changes made to RoutingNodes during an allocation round.
 
A base exception for all exceptions thrown by routing-related operations.
Class used to encapsulate a number of RerouteExplanation explanations.
 
 
 
 
A RoutingNode represents a cluster node associated with a single DiscoveryNode including all shards that are hosted on that node.
RoutingNodes represents a copy of the routing information contained in the cluster state.
 
Records if changes were made to RoutingNodes during an allocation round.
Represents a global cluster-wide routing table for all indices including the version of the current routing state.
Builder for the routing table.
Runnable that prevents running its delegate more than once.
Definition of a runtime field that can be defined as part of the runtime section of the index mappings
 
Parser for a runtime field.
 
Information about the safe commit, for making decisions about recoveries.
An allocation decider that prevents multiple instances of the same shard to be allocated on the same node.
A filter aggregation that defines a single bucket to hold a sample of top-matching documents.
 
Aggregate on only the top-scoring docs on a shard.
 
 
This provides information around the current sampling context for aggregations
 
A builder for scaling executors.
Scheduler that allows to schedule one-shot and periodic commands.
This interface represents an object whose execution may be cancelled during runtime.
This class encapsulates the scheduling of a Runnable that needs to be repeated on an interval.
This subclass ensures that Throwable instances of both type Error and Exception thrown in submitted/scheduled tasks are properly bubbled up to the uncaught exception handler
A scheduled cancellable allows cancelling and reading the remaining delay of a scheduled task.
Thread-safe scheduling implementation that'll cancel an already scheduled job before rescheduling.
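The entry above describes cancelling an already scheduled job before scheduling its replacement. A small sketch of that cancel-and-replace step on top of the JDK scheduler, with synchronization keeping it atomic (illustrative only, not the Elasticsearch implementation):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Sketch of "cancel an already scheduled job before rescheduling" using the JDK
// scheduler; not the Elasticsearch scheduling implementation.
final class ReschedulingJob {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> pending;

    synchronized void reschedule(Runnable job, long delayMillis) {
        if (pending != null) {
            pending.cancel(false); // drop the previously scheduled run
        }
        pending = scheduler.schedule(job, delayMillis, TimeUnit.MILLISECONDS);
    }
}
```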
 
 
 
 
A scope is a level of visibility that instances provided by Guice may have.
 
See the EDSL examples at Binder.
Built-in scope implementations.
References a scope, either directly (as a scope instance), or indirectly (as a scope annotation).
 
 
Static method aliases for constructors of known ScoreFunctionBuilders.
Parses XContent into a ScoreFunctionBuilder.
 
A script used for adjusting the score on a per document basis.
A helper to take in an explanation from a script and turn it into an Explanation
A factory to construct stateful ScoreScript factories for a specific index.
A factory to construct ScoreScript instances.
 
 
 
 
 
 
 
 
 
 
 
 
A sort builder allowing to sort by score.
Script represents user-defined input that can be used to compile and execute a script from the ScriptService based on the ScriptType.
SortedBinaryDocValues implementation that reads values from a script.
Script cache and compilation rate limiter.
 
 
Takes a Script definition and returns a compiled script factory
The information necessary to compile and run a script.
 
 
 
Record object that holds stats information for the different script contexts in a node.
Script level doc values, the assumption is that any implementation will implement a getValue method.
 
 
 
 
 
 
 
 
 
 
 
 
Supplies values to different ScriptDocValues as we convert them to wrappers around DocValuesScriptFieldFactory.
SortingNumericDoubleValues implementation which is based on a script
A metric aggregation that computes both its final and intermediate states using scripts.
 
 
 
 
 
 
 
 
 
 
 
A Similarity implementation that allows scores to be scripted.
Statistics that are specific to a document.
Statistics that are specific to a given field.
Scoring factors that come from the query.
Statistics that are specific to a given term.
A script language implementation.
Exception from a scripting engine.
 
Contains utility methods for compiled scripts without impacting concrete script signatures
This interface is used to mark classes that create and possibly cache a specifically-named Field for a script.
 
 
 
 
 
The allowable types, languages and their corresponding contexts.
LongValues implementation which is based on a script
ScriptMetadata is used to store user-defined scripts as part of the ClusterState using only an id as the key.
A builder used to modify the currently stored scripts data held within the ClusterState.
 
Manages building ScriptService.
An additional extension point for Plugins that extends Elasticsearch's scripting functionality.
 
 
A function that uses a script to compute or influence the score of documents that match with the inner query or filter.
A query that uses a script to compute documents' scores.
A query that computes a document score based on the provided script
 
Collect settings related to script context and general caches.
Script sort builder allows to sort based on a custom script expression.
 
Record object that holds global statistics of the scripts in a node.
ScriptType represents the way a script is stored and retrieved from the ScriptService.
A scroll enables scrolling of a search request.
A scrollable source of results.
 
An implementation of ScrollableHitSource.Hit that uses getters and setters.
A document returned as part of the response.
Response from each scroll batch.
A failure during search.
Wrapper around information that needs to stay around when scrolling.
 
 
 
 
A Query that only matches documents that are greater than the provided FieldDoc.
This terms enumeration initializes with a seek to a given term but excludes that term from any results.
This class encapsulates the state needed to execute a search.
The aggregation context that is part of the search context.
 
 
 
 
 
The context used to execute a search request on a shard.
A wrapper of search action listeners (search results) that unwraps the query result to get the piggybacked queue size and service time EWMA, adding those values to the coordinating nodes' ResponseCollectorService.
Intermediate serializable representation of a search ext section.
 
 
 
A single search hit.
 
Encapsulates the nested identity of a hit.
 
 
A predicate that checks whether an index pattern matches the current search shard target.
Provides a way to look up per-document values from docvalues, stored fields or _source
Sets up things that can be done at search time like queries, aggregations, and suggesters.
A listener for search, fetch and context events.
A Composite listener that multiplexes calls to each of the listeners methods.
 
 
 
 
 
This class is a base class for all search related results.
Plugin for extending search time behavior.
Specification for an Aggregation.
Context available during fetch phase construction.
Specification of GenericNamedWriteable classes that can be serialized/deserialized as generic objects in search results.
Specification for a PipelineAggregator.
Specification of custom Query.
Specification of custom QueryVectorBuilder.
 
Specification of custom RetrieverBuilder.
Specification of custom ScoreFunction.
Specification of search time behavior extension like a custom ScoreFunction.
Specification for a SearchExtBuilder which represents an additional section that can be parsed in a search request (within the ext element).
Specification of custom SignificanceHeuristic.
Specification for a Suggester.
 
Profile results from a shard for the search phase.
Profile results for all shards.
Profile results for the query phase run on all shards.
Profile results from a particular shard for all search phases.
An ActionListener for search requests that allows to track progress of the TransportSearchAction.
A listener that allows to track progress of the TransportSearchAction.
A request to execute search against one or more indices (or all).
A search action request builder.
A response of a search request.
Represents the search metadata about a particular cluster involved in a cross-cluster search.
Since the Cluster object is immutable, use this Builder class to create a new Cluster object using the "copyFrom" Cluster passed in and set only changed values.
Marks the status of a Cluster search involved in a Cross-Cluster search.
Holds info about the clusters that the search was executed on: how many in total, how many of them were successful and how many of them were skipped and further details in a Map of Cluster objects (when doing a cross-cluster search).
Merges multiple search responses into one.
 
Holds some sections that a search response is composed of (hits, aggs, suggestions etc.) during some steps of the search response building.
 
A search scroll action request builder.
 
Used to indicate which result object should be instantiated when creating a search context
A class that encapsulates the ShardId and the cluster alias of a shard used during the search action.
Extension of PlainShardIterator used in the search api, which also holds the OriginalIndices of the search request (useful especially with cross-cluster search, as each cluster has its own set of original indices) as well as the cluster alias.
Represents a group of nodes that a given ShardId is allocated on, along with information about whether this group might match the query or not.
A request to find the list of target shards that might match the query for the given target indices.
A response of SearchShardsRequest which contains the target shards grouped by ShardId
The target that the search request was executed on.
Task storing information about a currently running search shard request.
 
 
 
A search source builder allowing to easily build search source.
 
 
 
 
 
Task storing information about a currently running SearchRequest.
Specific instance of SearchException that indicates that a search timeout occurred.
 
An encapsulation of SearchService operations exposed through transport.
 
Search type represents the manner in which the search operation is executed.
Holds usage statistics for an incoming search request
Service responsible for holding search usage statistics, like the number of used search sections and queries.
Holds a snapshot of the search usage statistics.
 
 
A secure setting.
An accessor for settings which are securely stored.
A String implementations which allows clearing the underlying char array.
A pluggable provider of the list of seed hosts to use for discovery.
Helper object that allows to resolve a list of hosts to a list of transport addresses.
 
 
 
 
 
 
Mapper for the _seq_no field.
A sequence ID, which is made up of a sequence number (both the searchable and doc_value version of the field) and the primary term.
 
 
A utility class for handling sequence numbers.
 
A FilterLeafReader that exposes a StoredFieldsReader optimized for sequential access.
 
 
Arguments for running Elasticsearch.
 
This annotation is meant to be applied to RestHandler classes, and is used to determine which RestHandlers are available to requests at runtime in Serverless mode.
 
An approximate set membership data structure that scales as more unique values are inserted.
 
A setting.
A key that allows for static pre and suffix.
 
Allows an affix setting to declare a dependency on another affix setting.
 
 
 
 
Allows a setting to declare a dependency on another setting being set.
 
Represents a validator for a setting.
An immutable settings implementation.
A builder allowing to put different settings and then Settings.Builder.build() an immutable settings implementation.
An implementation of SeedHostsProvider that reads hosts/ports from the "discovery.seed_hosts" node setting.
A generic failure to handle settings.
An SPI interface for registering Settings.
A class that allows to filter settings objects by simple regular expression patterns or full settings keys.
 
A module that binds the provided settings to the Settings interface.
Updates transient and persistent cluster state settings if there are any changes due to the update.
Utility that converts geometries into Lucene-compatible form for indexing in a shape or geo_shape field.
Enum representing the relationship between a Query / Filter Shape and indexed Shapes that will be used to determine if a Document should be matched or not
Represents the decision taken for the allocation of a single shard.
 
 
Internal class that maintains relevant shard bulk statistics / metrics.
A map between segment core cache keys and the shard that these segments belong to.
 
A SortField that first compares the shard index and then uses the document number (_doc) to tiebreak if the value is the same.
Shard level fetch base request.
Shard level fetch request used with search.
 
Records and provides field usage stats
 
The generation ID of a shard, used to name the shard-level index-$SHARD_GEN file that represents a BlobStoreIndexShardSnapshots instance.
Represents the current ShardGeneration for each shard in a repository.
 
 
Allows for shard level components to be injected with the shard id.
 
Allows to iterate over a set of shard instances (routing) within a shard id group.
This class contains the logic used to check the cluster-wide shard limit before shards are created and ensuring that the limit is updated correctly on setting updates, etc.
A Result object containing enough information to be used by external callers about the state of the cluster from the shard limits perspective.
A shard lock guarantees exclusive access to a shards data directory.
Exception used when the in-memory lock for a shard cannot be obtained
Class representing an (inclusive) range of long values in a field in a single shard.
 
 
An exception indicating that a failure occurred performing an operation on the shard.
 
 
 
A request that is sent to the promotable replicas of a primary shard
Tracks the portion of the request cache in use for a particular shard.
ShardRouting immutably encapsulates information about shard indexRoutings like id, state, version, etc.
 
 
Represents the current state of a ShardRouting as defined by the cluster.
 
A ShardsAllocator is the main entry point for shard allocation on nodes in the cluster.
This indicator reports health for shards.
 
This indicator reports health data about the shard capacity across the cluster.
 
Represents a failure to search on a specific shard.
Shard level request that represents a search.
 
 
A shuffler for shards whose primary goal is to balance load.
Allows to iterate over unrelated shards.
This AllocationDecider limits the number of shards per node on a per index or node-wide basis.
 
 
The details of a successful shard-level snapshot that are used to build the overall snapshot during finalization.
 
ShardSnapshotTaskRunner performs snapshotting tasks, prioritizing ShardSnapshotTaskRunner.ShardSnapshotTask over ShardSnapshotTaskRunner.FileSnapshotTask.
 
 
 
 
 
 
 
 
 
 
 
 
Internal validate request executed directly against a specific index shard.
 
 
 
A FilterMergePolicy that interleaves eldest and newest segments picked by MergePolicy.findForcedMerges(org.apache.lucene.index.SegmentInfos, int, java.util.Map<org.apache.lucene.index.SegmentCommitInfo, java.lang.Boolean>, org.apache.lucene.index.MergePolicy.MergeContext) and MergePolicy.findForcedDeletesMerges(org.apache.lucene.index.SegmentInfos, org.apache.lucene.index.MergePolicy.MergeContext).
A ShutdownAwarePlugin is a plugin that can be made aware of a shutdown.
 
 
 
 
Heuristic that SignificantTerms uses to pick out significant terms.
 
Result of running the significant terms aggregation on a numeric field.
 
Result of running the significant terms aggregation on a String field.
 
An aggregation that collects significant terms in comparison to a background set.
 
 
 
 
A script used in significant terms heuristic scoring.
 
 
 
 
Wrapper around a Similarity and its name.
A script that is used to build ScriptedSimilarity instances.
 
 
A script that is used to compute scoring factors that are the same for all documents.
 
A basic batch executor implementation for tasks that can listen for acks themselves by providing a ClusterStateAckListener.
A basic implementation for batch executors that simply need to execute the tasks in the batch iteratively.
Simple diffable object with simple diffs implementation that sends the entire object if object has changed or nothing if object remained the same.
 
Transforms points and rectangles objects in WGS84 into mvt features.
Direct Subclass of Lucene's org.apache.lucene.search.vectorhighlight.SimpleFragmentsBuilder that corrects offsets for broken analysis chains.
 
MappedFieldType base impl for field types that are neither dates nor ranges.
SimpleQuery is a query parser that acts similarly to a query_string query, but won't throw exceptions for any weird string syntax.
Flags for the XSimpleQueryString parser
Wrapper class for Lucene's SimpleQueryStringQueryParser that allows us to redefine different types of queries.
Class encapsulating the settings for the SimpleQueryString query, with their default values
 
A facade for SimpleFeatureFactory that converts it into FormatterFactory for use in GeoPointFieldMapper
 
This extends BulkRequest with support for providing substitute pipeline definitions.
Holds the end result of what a pipeline did to a sample document provided via the simulate api.
 
Holds the result of what a pipeline did to a sample document via the simulate api, but instead of SimulateDocumentBaseResult this result class holds the intermediate result each processor did to the sample document.
This is an IndexResponse that is specifically for simulate requests.
 
 
Contains the information on what V2 templates would match a given index.
This is an implementation of IngestService that allows us to substitute pipeline definitions so that users can simulate ingest using pipelines that they define on the fly.
 
 
 
 
 
 
 
An action for simulating the complete composed settings of the specified index template name, or index template configuration
 
A single bucket aggregation
A bucket aggregator that doesn't create new buckets.
Holds the results of migrating a single feature.
StoredFieldVisitor that loads a single field value.
 
Contains data about a single node's shutdown readiness.
 
Describes the status of a component of shutdown.
Describes the type of node shutdown - permanent (REMOVE) or temporary (RESTART).
A very simple single object cache that allows non-blocking refresh calls triggered by expiry time.
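For illustration, a minimal sketch of such an expiry-triggered single-object cache, written against plain Java concurrency primitives rather than the Elasticsearch class (all names here are hypothetical):

    import java.util.concurrent.atomic.AtomicBoolean;
    import java.util.concurrent.atomic.AtomicReference;
    import java.util.function.Supplier;

    // Hypothetical sketch, not the Elasticsearch implementation: one cached value,
    // refreshed by at most one caller once the TTL has elapsed; other callers never block.
    final class ExpiringSingleObjectCache<V> {
        private record Entry<T>(T value, long createdNanos) {}

        private final long ttlNanos;
        private final Supplier<V> loader;
        private final AtomicReference<Entry<V>> cached;
        private final AtomicBoolean refreshing = new AtomicBoolean();

        ExpiringSingleObjectCache(long ttlNanos, Supplier<V> loader) {
            this.ttlNanos = ttlNanos;
            this.loader = loader;
            this.cached = new AtomicReference<>(new Entry<>(loader.get(), System.nanoTime()));
        }

        V getOrRefresh() {
            Entry<V> entry = cached.get();
            boolean expired = System.nanoTime() - entry.createdNanos() > ttlNanos;
            if (expired && refreshing.compareAndSet(false, true)) {
                try {
                    cached.set(new Entry<>(loader.get(), System.nanoTime()));
                } finally {
                    refreshing.set(false);
                }
            }
            return cached.get().value();
        }
    }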
 
A collector that groups documents based on field values and returns TopFieldGroups output.
Wraps an async action that consumes an ActionListener such that multiple invocations of SingleResultDeduplicator.execute(ActionListener) can share the result from a single call to the wrapped action.
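As a rough illustration of the deduplication idea (not the Elasticsearch ActionListener-based API), concurrent callers can share a single in-flight CompletableFuture:

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.atomic.AtomicReference;
    import java.util.function.Supplier;

    // Hypothetical sketch: at most one execution of the wrapped action is in flight,
    // and every caller that arrives while it runs shares its result.
    final class ResultDeduplicator<T> {
        private final Supplier<CompletableFuture<T>> action;
        private final AtomicReference<CompletableFuture<T>> inFlight = new AtomicReference<>();

        ResultDeduplicator(Supplier<CompletableFuture<T>> action) {
            this.action = action;
        }

        CompletableFuture<T> execute() {
            while (true) {
                CompletableFuture<T> existing = inFlight.get();
                if (existing != null) {
                    return existing;                // join the in-flight execution
                }
                CompletableFuture<T> fresh = new CompletableFuture<>();
                if (inFlight.compareAndSet(null, fresh)) {
                    action.get().whenComplete((result, error) -> {
                        inFlight.set(null);         // let the next caller trigger a new run
                        if (error != null) {
                            fresh.completeExceptionally(error);
                        } else {
                            fresh.complete(result);
                        }
                    });
                    return fresh;
                }
                // lost the race: loop and join the winner
            }
        }
    }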
 
 
A size based queue wrapping another blocking queue to provide (somewhat relaxed) capacity checks.
An aggregator capable of reporting bucket sizes in requested units.
 
 
A slice builder allowing to split a scroll in multiple partitions.
A SlicedInputStream is a logical concatenation of one or more input streams.
An abstract Query that defines a hash function to partition the documents in multiple slices.
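Conceptually, hash-based slicing assigns each document to exactly one of the requested slices; a hypothetical sketch of that assignment:

    // Illustrative only: a document belongs to slice `sliceId` out of `maxSlices`
    // when its hash, reduced modulo the slice count, equals that id.
    static boolean belongsToSlice(String docId, int sliceId, int maxSlices) {
        int hash = docId.hashCode();                 // stand-in for the real hash function
        return Math.floorMod(hash, maxSlices) == sliceId;
    }
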
Interface for providing additional fields to the slow log from a plugin.
Deprecated.
 
Basic information about a snapshot - a SnapshotId and the repository that the snapshot belongs to.
 
A class that represents the snapshot deletions that are in progress in the cluster.
A class representing a snapshot deletion request entry in the cluster state.
 
Generic snapshot exception
 
Contains a list of files participating in a snapshot
 
SnapshotId - snapshot name + snapshot UUID
A (closeable) IndexCommit plus ref-counting to keep track of active users, and with the facility to drop the "main" initial ref early if the shard snapshot is aborted.
 
 
Represents snapshot status of all shards in the index
Information about a snapshot
 
This AllocationDecider prevents shards that are currently being snapshotted from being moved to other nodes.
Thrown on the attempt to execute an action that requires that no snapshot is in progress.
Thrown if the requested snapshot doesn't exist.
Thrown on the attempt to create a snapshot with a name that is taken either by a snapshot in progress or by a snapshot that already exists.
Represents a filter on snapshots by name, including some special values such as _all and _current, as supported by TransportGetSnapshotsAction.
Snapshot restore exception
Context holding the state for creating a shard snapshot via Repository.snapshotShard(SnapshotShardContext).
Stores information about failures that occurred during the shard snapshotting process for serialization as part of SnapshotInfo.
 
This service runs on data nodes and controls currently running shard snapshots on these nodes.
Status of a snapshot's shards
 
Metadata about snapshots that are currently executing
 
 
 
 
Sort key for snapshots e.g.
 
Service responsible for creating snapshots.
Snapshots status action
Get snapshot status request
Snapshots status request builder
Snapshot status response
Represents the state that a snapshot can be in
 
Status of a snapshot
 
Snapshot utilities
 
 
Represents a scored highlighted snippet.
 
 
A set of static factory methods for SortBuilders.
An enum representing the valid sorting options
A list of per-document binary values, sorted according to BytesRef.compareTo(BytesRef).
FieldData for floating point types backed by LeafReader.getSortedNumericDocValues(String)
 
 
Load _source fields from SortedNumericDocValues.
Clone of SortedNumericDocValues for double values.
FieldData for integral types backed by LeafReader.getSortedNumericDocValues(String)
 
A small helper class that can be configured to load nanosecond field data either in nanosecond resolution, retaining the original values, or in millisecond resolution, converting the nanosecond values to milliseconds.
 
 
A LeafFieldData implementation that uses Lucene SortedSetDocValues.
 
Load _source fields from SortedSetDocValues.
 
 
 
Base class for building SortedBinaryDocValues instances based on unsorted content.
Base class for building SortedNumericDocValues instances based on unsorted content.
Base class for building SortedNumericDoubleValues instances based on unsorted content.
Elasticsearch supports sorting by array or multi-valued fields.
A sorting order.
A Comparable, DocValueFormat aware wrapper around a sort value.
The source of a document.
Load _source into blocks.
 
 
 
Implements source filtering based on a list of included and excluded fields.
Loads _source during a GET or _search.
Loads _source from some segment.
Load _source from doc values.
Load a field for SourceLoader.Synthetic.
Loads doc values for a field.
Sync for stored field values.
PrioritizedRunnable that also has a source string
Provides access to the calling line of code.
Provides access to the Source of a document
 
 
 
An implementation of ValueFetcher that knows how to extract values from the document source.
 
 
 
Marker interface to indicate these doc values are generated on-the-fly from a ValueFetcher.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
A span rewrite method that extracts the first maxExpansions terms that match the MultiTermQuery in the terms dictionary.
Builder for SpanContainingQuery.
 
A SpanQuery that matches no documents.
Query that allows wrapping a MultiTermQueryBuilder (one of wildcard, fuzzy, prefix, term, range or regexp query) as a SpanQueryBuilder so it can be nested.
Matches spans which are near one another.
SpanGapQueryBuilder enables gaps in a SpanNearQuery.
 
Span query that matches the union of its clauses.
Marker interface for a specific type of QueryBuilder that allows to build span queries.
 
A Span Query that matches documents containing a term.
Builder for SpanWithinQuery.
A FieldMapper that exposes Lucene's FeatureField as a sparse vector of features.
 
 
An aggregation that computes a bounding box in which all documents of the current bucket are.
To maximize the use of common code between GeoPoint and projected CRS points, this ElasticPoint interface captures their commonality.
 
Elasticsearch-specific permission to check before entering AccessController.doPrivileged() blocks.
Utility functions to transform WGS84 coordinates into spherical mercator.
Helper class for loading SPI classes from classpath (META-INF files).
A utility class containing methods that wrap the Stable plugin API with the old plugin API.
This indicator reports the health of master stability.
A registry of classes declared by plugins as named components.
Creates stack trace elements for members.
A class that represents a stale shard copy.
 
A standard retriever is used to represent anything that is a query along with some elements to specify parameters for that query.
 
Represents the action of requesting a join vote (see Join) from a node.
This action can be used to add the record for the persistent action to the cluster state.
 
 
Represents a request for starting a peer recovery.
A wrapper for exceptions occurring during startup.
 
 
 
Statistics over a set of values (either aggregated over field data or scripts)
 
Statistics over a set of buckets
 
 
 
Class that represents the Health status for a node as determined by NodeHealthService and provides additional info explaining the reasons
 
 
Enables simple performance monitoring.
Simple stop watch, allowing for timing of a number of tasks, exposing total running time and running time for each named task.
Inner class to hold data about one task executed within the stop watch.
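The general pattern behind such a stop watch can be sketched with plain Java timing calls (the names below are illustrative, not the Elasticsearch API):

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Minimal sketch of the named-task timing pattern described above.
    final class SimpleStopWatch {
        private final Map<String, Long> taskTimesNanos = new LinkedHashMap<>();
        private String currentTask;
        private long startNanos;

        void start(String taskName) {
            currentTask = taskName;
            startNanos = System.nanoTime();
        }

        void stop() {
            taskTimesNanos.put(currentTask, System.nanoTime() - startNanos);
            currentTask = null;
        }

        long totalTimeNanos() {
            return taskTimesNanos.values().stream().mapToLong(Long::longValue).sum();
        }
    }
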
A Store provides plain access to files written by an Elasticsearch index shard.
Represents a snapshot of the current directory build from the latest Lucene commit.
A listener that is executed once the store is closed and all references to it are released
A class representing the diff between a recovery source and recovery target
Per segment values for a field loaded from stored fields.
Generates a LeafStoredFieldLoader for a given lucene segment to load stored fields.
Context used to fetch the stored_fields.
Per segment values for a field loaded from stored fields exposing SortedBinaryDocValues.
Process stored fields loaded from a HitContext into DocumentFields
Defines which stored fields need to be loaded during a fetch
StoredScriptSource represents user-defined parameters for a script saved in the ClusterState.
Value fetcher that loads from stored values.
 
 
This package private utility class encapsulates the logic to recover an index shard from either an existing index on disk or from a snapshot in a repository.
 
A stream from this node to another node.
A stream from another node to this node.
Simple utility methods for file and stream copying.
 
 
 
 
 
 
Base class for MappedFieldType implementations that use the same representation for internal index terms as the external representation so that partial matching queries such as prefix, wildcard and fuzzy queries can be implemented.
A cache in front of Java's string interning.
 
 
An aggregator that finds "rare" string values (e.g.
String utilities.
 
 
 
 
 
 
 
 
 
 
 
 
 
A factory to construct stateful StringSortScript factories for a specific index.
A factory to construct StringSortScript instances.
 
Result of the TermsAggregator when the field is a String.
 
Adapts a terms aggregation into a filters aggregation.
A "stupid-backoff" smoothing model similar to Katz's Backoff.
An ActionListener to which other ActionListener instances can subscribe, such that when this listener is completed it fans-out its result to the subscribed listeners.
 
SearchQueryBuilder is a wrapper class containing all the information required to perform a single search query as part of a series of multiple queries for features like ranking.
Top level suggest result, containing the result for each suggestion.
The suggestion responses corresponding with the suggestions in the request.
Represents a part from the suggest text with suggested options.
Contains the suggested text with its document frequency and score.
Defines how to perform suggesting.
A static factory for building suggester lookup queries
 
 
Base class for the different suggestion implementations.
 
 
Suggest phase of a search request, used to collect suggestions
 
 
 
 
 
 
Annotation to suppress logging usage check errors inside a whole class or a method.
 
 
Manages synonyms by performing operations on the system index
 
 
Statistics about an index feature.
 
Describes a DataStream that is reserved for use by a system feature.
 
Uses a pattern string to define a protected space for indices belonging to a system feature, and, if needed, provides metadata for managing indices that match the pattern.
Provides a fluent API for building a SystemIndexDescriptor.
The version of the mapping, which should be stored as an int in a mapping metadata field.
The specific type of system index that this descriptor represents.
This class ensures that all system indices have up-to-date mappings, provided those indices can be automatically managed.
A service responsible for updating the metadata used by system indices.
Starts the process of migrating system indices.
The params used to initialize SystemIndexMigrator when it's initially kicked off.
Contains the current state of system index migration progress.
This is where the logic to actually perform the migration lives - SystemIndexMigrator.run(SystemIndexMigrationTaskState) will be invoked when the migration process is started, plus any time the node running the migration drops from the cluster/crashes/etc.
A Plugin that can store state in protected Elasticsearch indices or data streams.
Provides information about system-owned indices and data streams for Elasticsearch and Elasticsearch plugins.
Describes an Elasticsearch system feature that keeps state in protected indices and data streams.
Type for the handler that's invoked when all of a feature's system indices have been migrated.
Type for the handler that's invoked prior to migrating a Feature's system indices.
In a future release, these access levels will be used to allow or deny requests for system resources.
 
 
Current task information
Report of the internal status of a task.
An interface for a request that can be used to register a task manager task
 
A generic exception that can be thrown by a task when it's cancelled by the task manager API
An extension to thread pool executor, which tracks statistics for the task execution time.
Information about a currently running task and all its subtasks.
 
Task id that consists of node id and id of the task on the node
Information about a currently running task.
Task Manager service for keeping track of currently running tasks on the nodes
Information about task operation failures. The class is final due to serialization limitations.
Information about a running task or a task that stored its result.
Service that can store task results.
 
Builder for task-based requests
 
 
This is a TCP channel representing a single channel connection to another node.
 
 
This is a TCP channel representing a server channel listening for new connections.
 
A helper exception to mark an incoming connection as potentially being HTTP so an appropriate error code can be returned
Representation of the settings for a transport profile, transport.profiles.$profilename.*
 
Indicates which implementation is used in TDigestState.
Decorates TDigest with custom serialization.
 
 
 
A template consists of optional settings, mappings, alias or lifecycle configuration for an index or data stream; however, it is entirely independent of an index or data stream.
A string template rendered as a script.
 
Upgrades Templates on behalf of installed Plugins when a node joins the cluster
Base MappedFieldType implementation for a field that is indexed with the inverted index.
Interface for termination handlers, which are called after Elasticsearch receives a signal from the OS indicating it should shut down but before core services are stopped.
SPI service interface for providing hooks to handle graceful termination.
A Query that matches documents containing a term.
A terms aggregation.
A bucket that is associated with a single term
 
 
 
This class provides bucket thresholds configuration, but can be used to ensure default value immutability
 
 
Encapsulates the parameters needed to fetch terms.
A filter for a field based on several terms matching on any of them.
Store terms as a BytesReference.
 
 
A factory to construct stateful TermsSetQueryScript factories for a specific index.
A factory to construct TermsSetQueryScript instances.
A SliceQuery that uses the terms dictionary of a field to do the slicing.
 
The suggestion responses corresponding with the suggestions in the request.
Represents a part from the suggest text with suggested options.
Contains the suggested text with its document frequency and score.
 
 
Defines the actual suggest command.
An enum representing the valid string edit distance algorithms for determining suggestions.
An enum representing the valid suggest modes.
A CompositeValuesSourceBuilder that builds a ValuesSource from a Script or a field name.
 
 
This class represents the result of a TermVectorsRequest.
 
 
Request returning the term vector (doc frequency, positions, offsets) for a document.
 
 
The builder class for a term vector request.
 
 
Both String and BytesReference representation of the text.
 
A FieldMapper for full-text fields.
 
 
 
 
Utility functions for text mapper parameters
 
Encapsulates information about how to perform text searches over a field
What sort of term vectors are available
A ThreadContext is a map of string headers and a transient map of keyed objects that are associated with a thread.
 
An action listener that wraps another action listener and dispatches its completion to an executor.
Manages all the Java thread pools we create.
The settings used to create a Java ExecutorService thread pool.
List of names that identify Java thread pools that are created in ThreadPool().
 
 
 
 
 
 
ThrottlingAllocationDecider controls the recovery process per node in the cluster.
An extension to the cluster state listener that allows for timeouts and for post-added notifications.
Helps measure how much time is spent running some methods.
A response class representing a snapshot of a TimeSeriesCounter at a point in time.
Provides a counter with a history of 5m/15m/24h.
 
Mapper for the _tsid field, which is generated when the index is organized into time series.
 
 
 
An IndexSearcher wrapper that executes the searches in time-series indices by traversing them by tsid and timestamp. TODO: convert it to use the index sort instead of hard-coded tsid and timestamp values.
Utility functions for time series related mapper parameters
There are various types of metric used in time-series aggregations and downsampling.
Mapper for the _ts_routing_hash field.
Holds ValuesSourceType implementations for time series fields
Bounds for the @timestamp field on this index.
Tracks the mapping of the @timestamp field of immutable indices that expose their timestamp range in their index metadata.
 
SchedulerEngine.Schedule implementation wrapping a TimeValue interval that'll compute the next scheduled execution time according to the configured interval.
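The arithmetic behind such a fixed-interval schedule is straightforward; a hypothetical helper (not the Elasticsearch class) that returns the first interval boundary strictly after the current time:

    // Illustrative only: next execution time for a fixed interval anchored at startTime.
    static long nextScheduledTimeMillis(long startTimeMillis, long intervalMillis, long nowMillis) {
        if (nowMillis < startTimeMillis) {
            return startTimeMillis;                          // schedule has not started yet
        }
        long elapsedIntervals = (nowMillis - startTimeMillis) / intervalMillis;
        return startTimeMillis + (elapsedIntervals + 1) * intervalMillis;
    }
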
Provides a contract writing the full contents of an object as well as a mechanism for filtering some fields from the response.
 
 
 
Merges many buckets into the "top" buckets as sorted by BucketOrder.
Wrapper around a TopDocs instance and the maximum score.
Represents hits returned by SinglePassGroupingCollector.getTopGroups(int).
Accumulation of the most relevant hits for a bucket this aggregation falls into.
 
A helper class that lets LeafFieldData.getScriptFieldFactory(java.lang.String) translate from raw doc values into the field provider that the scripting API requires.
Helps with toString() methods.
A class that can be traced using the telemetry tracing API
Required methods from ThreadContext for Tracer
Pattern converter to format the trace id provided in the traceparent header into JSON fields trace.id.
Represents a distributed tracing system that keeps track of the start and end of various activities in the cluster.
 
Processor to be used within Simulate API to keep track of processors executed in pipeline.
 
A ClusterState wrapper used by the ReservedClusterStateService to pass the current state as well as previous keys set by an ReservedClusterStateHandler to each transform step of the cluster state update.
A Translog is a per index shard component that records all non-committed index operations in a durable manner.
 
 
 
 
 
A generic interface representing an operation performed on the transaction log.
 
A snapshot of the transaction log, allows to iterate over all the transaction log operations.
References a transaction log generation
 
 
 
 
An immutable translog file reader.
 
 
 
A unidirectional connection to a DiscoveryNode
 
This class represents a response context that encapsulates the actual response handler, the action.
This class is a registry that allows
 
TransportActionProxy allows an arbitrary action to be executed on a defined target node while the initial request is sent to a second node that acts as a request proxy to the target node.
 
 
 
Adds a single index level block to a given set of indices.
A transport address used for IP socket address (wraps InetSocketAddress).
 
 
Transport action used to execute analyze requests
 
 
 
Abstraction for transporting aggregated shard-level operations in a single request (NodeRequest) per-node and executing the shard-level operations serially on the receiving node.
Can be used for implementations of shardOperation for which there is no shard-level return value.
 
Base class for requests that should be executed on all shards of an index or several indices.
 
Groups bulk request items by shard, optionally creating non-existent indices and delegates to TransportShardBulkAction for shard-level bulk execution
Transport action that can be used to cancel currently running cancellable tasks.
A transport channel allows to send a response to a request on the channel.
Repository cleanup action for repository implementations based on BlobStoreRepository.
Indices clear cache action.
 
 
Transport action for the clone snapshot operation.
Close index action
 
The TransportClusterAllocationExplainAction is responsible for actually executing the explanation of a shard's allocation on the master node in the cluster.
 
 
 
 
 
 
 
 
 
A listener interface that allows to react on transport events.
Create index action.
Transport action for create snapshot operation
 
Deprecated.
 
 
 
 
Implements the deletion of a dangling index.
 
 
 
 
Delete index action.
Delete index action.
Transport action for unregister repository operation
Transport action for delete snapshot operation
 
 
 
 
Explain transport action.
 
 
 
Finds a specified dangling index by its UUID, searching across all nodes.
Flush Action.
ForceMerge index/indices action.
Performs the get operation.
NB prior to 8.12 this was a TransportMasterNodeReadAction so for BwC it must be registered with the TransportService (i.e.
 
 
 
 
 
 
 
Transport class for the get feature upgrade status action
 
Transport action used to retrieve the mappings related to fields that belong to a specific index
 
 
 
Get index action.
 
 
Transport action for get repositories operation
 
 
 
 
Transport Action for get snapshots operation
 
 
 
 
ActionType to get a single task.
A base class for operations that need to be performed on the health node.
Implements the import of a dangling index.
Deprecated.
Add/remove aliases action
 
Transport action that reads the cluster state for shards with the requested criteria (see ClusterHealthStatus) of specific indices and fetches store information from all the nodes using TransportNodesListGatewayStartedShards
 
 
 
This interface allows plugins to intercept requests on both the sender and the receiver side.
Implements the listing of all dangling indices.
 
Analogue of TransportMasterNodeReadAction except that it runs on the local node rather than delegating to the master.
 
A base class for operations that need to be performed on the master node.
A base class for read operations that need to be performed on the master node.
 
 
 
 
 
 
 
 
 
 
This transport action is used to fetch the shard version from each node during primary allocation in GatewayAllocator.
 
 
 
 
 
 
 
 
 
 
 
Transport action that collects snapshot shard statuses from data nodes
 
 
 
 
 
 
 
 
Exception indicating that the TransportService received a request before it was ready to handle it, so the request should be rejected and the connection closed.
Open index action
 
 
Transport action for post feature upgrade action
 
Given a set of shard IDs, checks which of those shards have a matching directory in the local data path.
 
 
A request for putting a single index template into the cluster state
Put index template action.
Put mapping action.
Transport action for register repository operation
 
 
 
Transport action for shard recovery operation.
Refresh action.
Indices clear cache action.
 
Base class for requests that should be executed on a primary copy followed by replica copies.
 
A wrapper class to encapsulate a request when being sent to a specific allocation id.
 
 
 
 
 
 
 
 
 
Transport action for cleaning up feature index state.
Main class to initiate resizing (shrink / split) an index into a new index
 
 
 
 
Implementation of TransportResponseHandler that handles the empty response TransportResponse.Empty.
Transport action for restore snapshot operation
 
Main class to swap the index pointed to by an alias, given some conditions
 
 
Search operations need two clocks.
 
 
An internal search shards API performs the can_match phase and returns target shards of indices that might match a query.
 
 
This handler wrapper ensures that the response thread executes with the correct thread context.
 
 
Performs shard-level bulk (index, delete or update) operations
 
 
 
 
 
 
 
 
 
Handles simulating an index template either by name (looking it up in the cluster state), or by a provided template configuration
Deprecated.
A base class for operations that need to perform a read operation on a single shard copy.
 
 
 
 
The base class for transport actions that are interacting with currently running tasks.
Performs the get operation.
 
 
 
 
 
Transport action for verifying repository operation
 
 
Action used to verify whether shards have properly applied a given index block, and are no longer executing any operations in violation of that block.
 
Represents the version of the wire protocol used to communicate between a pair of ES nodes.
Transport version is used to coordinate compatible wire protocol communication between nodes, at a fine-grained level.
This fixes up the transport version from pre-8.8.0 cluster state that was inferred as the minimum possible, due to the master node not understanding cluster state with transport versions added in 8.8.0.
Base class for transport actions that modify data in some shard like index, delete, and shardBulk.
Result of taking the action on the primary.
Result of taking the action on the replica.
Visitor for triangle interval tree.
Visitor for triangle interval tree which decodes the coordinates
This is a tree-writer that serializes a list of ShapeField.DecodedTriangle as an interval tree into a byte array.
Represents an operation that accepts three arguments and returns no result.
Represents a function that accepts three arguments and produces a result.
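A plausible shape for such three-argument functional interfaces (the exact Elasticsearch declarations may differ):

    @FunctionalInterface
    interface TriConsumer<S, T, U> {
        // Performs this operation on the three given arguments.
        void apply(S s, T t, U u);
    }

    @FunctionalInterface
    interface TriFunction<S, T, U, R> {
        // Applies this function to the three given arguments and returns the result.
        R apply(S s, T t, U u);
    }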
 
 
 
A mapper for the _id field that builds the _id from the _tsid and @timestamp.
A Collector extension that allows to run a post-collection phase.
Converts constant string values to a different type.
Represents a generic type T.
 
 
 
Static methods for working with types.
This classloader will load classes from non-modularized sets of jars.
 
Holds additional information as to why the shard is in unassigned state.
Captures the status of an unsuccessful allocation attempt for the shard, causing it to remain in the unassigned state.
Reason why the shard is in unassigned state.
 
 
This class represents a repository that could not be initialized due to an unknown type.
Class to fetch all unmapped fields from a Source that match a set of patterns. Takes a set of mapped fields to ignore when matching, which should include any nested mappers.
Result of the RareTerms aggregation when the field is unmapped.
 
 
Result of the running the significant terms aggregation on an unmapped field.
Concrete type that can't be built because Java needs a concrete type so InternalTerms.Bucket can have a self type but UnmappedTerms doesn't ever need to build it because it never returns any buckets.
Result of the TermsAggregator when the field is unmapped.
Concrete type that can't be built because Java needs a concrete type so InternalTerms.Bucket can have a self type but UnmappedTerms doesn't ever need to build it because it never returns any buckets.
 
 
Thrown when executing an aggregation on a time series index field whose type is not supported.
An untargeted binding.
 
 
Metadata for the UpdateByQueryMetadata context.
Request to update some documents.
 
A script used by the update by query api
 
Source and metadata for update (as opposed to insert via upsert) in the Update context.
 
 
 
This action allows a node to send their health info to the selected health node.
 
 
 
Helper for translating an update request to an index, delete request or update response.
Field names used to populate the script context
 
Internal request that is used to send changes in snapshot status to master
The update context has read-only metadata: _index, _id, _version, _routing, _type (always '_doc'), _now (timestamp in millis) and a read-write op that may be one of 'noop' or 'none' (legacy), 'index', 'delete' or null
 
 
 
 
 
 
Builder class for UpdateResponse.
A script used in the update API
 
Cluster state update request that allows to update settings for some indices
Request for an update index settings action
Builder for an update index settings request
Metadata for insert via upsert in the Update context
URI pattern matcher. The pattern is a URI in which the authority, path, query and fragment can be replaced with simple patterns.
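One simple way to implement such wildcard matching is to translate the pattern into a regular expression; a hypothetical sketch, not the actual class:

    import java.util.regex.Pattern;

    final class UriGlob {
        // '*' matches any run of characters; everything else is matched literally.
        static boolean matches(String pattern, String uri) {
            String regex = ("\\Q" + pattern + "\\E").replace("*", "\\E.*\\Q");
            return Pattern.matches(regex, uri);
        }
    }
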
A service to monitor usage of Elasticsearch features.
 
 
 
 
A request to validate a specific query.
 
The response of the validate action.
Encapsulates an accumulation of validation errors
Holds the number of values that the current document set has for a specific field.
 
A field data based aggregator that counts the number of values a specific field has within the aggregation context.
A helper class for fetching field values during the FetchFieldsPhase.
Holds a value.
 
 
 
 
 
A unified interface to different ways of getting input data for Aggregators like DocValues from Lucene or script output.
ValuesSource for fields whose values are best thought of as byte arrays without any other meaning like keyword or ip.
 
ValuesSource implementation for stand alone scripts returning a Bytes value
Specialization of ValuesSource.Bytes whose underlying storage de-duplicates its bytes by storing them in a per-leaf sorted lookup table.
 
ValuesSource subclass for Bytes fields with a Value Script applied
ValuesSource for fields whose values are best thought of as points on a globe.
 
ValuesSource for fields whose values are best thought of as numbers.
 
ValuesSource implementation for stand alone scripts returning a Numeric value
ValuesSource subclass for Numeric fields with a Value Script applied
ValuesSource for fields whose values are best thought of as ranges of numbers, dates, or IP addresses.
 
 
 
 
 
A configuration that tells aggregations how to retrieve data from the index in order to run a specific aggregation.
ValuesSourceRegistry holds the mapping from ValuesSourceTypes to functions for building aggregation components.
 
 
ValuesSourceType represents a collection of fields that share a common set of operations, for example all numeric fields.
Deprecated.
We are in the process of replacing this class with ValuesSourceType, so new uses or entries to the enum are discouraged.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
This query provides a simple post-filter for the provided Query.
 
 
 
Verify repository action
Verify repository request.
Builder for verify repository request
Verify repository response
 
 
This StreamOutput writes nowhere.
 
 
 
A NamedWriteable that has a minimum version associated with it.
Allows plugging in current version elements.
Mapper for the _version field.
Indicates a class that represents a version id of some kind
Represents the versions of various aspects of an Elasticsearch node.
 
 
Utility class to resolve the Lucene doc ID, version, seqNo and primaryTerms for a given uid.
Wraps a LeafReaderContext, a doc ID relative to the context doc base and a seqNo.
Wraps a LeafReaderContext, a doc ID relative to the context doc base and a version.
VersionStats calculates statistics for index creation versions mapped to the number of indices, primary shards, and size of primary shards on disk.
 
 
 
 
A query that multiplies the weight to the score.
An aggregation that computes the average of the values in the current bucket.
 
 
Implements the wildcard search query.
 
 
Task behavior for BulkByScrollTask that does the actual work of querying and indexing
 
A Query builder which allows building a query given JSON string or binary data provided as input.
A wrapping processor is one that encapsulates an inner processor, or a processor that the wrapped processor acts upon.
Implementers can be written to a StreamOutput and read from a StreamInput.
Reference to a method that can read some object from a stream.
Reference to a method that can write some object to a StreamOutput.
Simple wrapper around ZoneId so that it can be written to XContent
 
 
 
 
Interface implemented by requests that modify the documents in an index like IndexRequest, UpdateRequest, and BulkRequest.
 
 
Interface implemented by responses for actions that modify the documents in an index like IndexResponse, UpdateResponse, and BulkResponse.
Abstract base class for scripts that write documents.
This exception is thrown when there is a problem of writing state to disk.
SPI extensions for Elasticsearch-specific classes (like the Lucene or Joda dependency classes) that need to be encoded by XContentBuilder in a specific way.
 
A FunctionalInterface that can be used in order to customize map merges.
 
A set of static methods to get XContentParser.Token from XContentParser while checking for their types and throw ParsingException if needed.
Generate "more like this" similarity queries.
 
StoredFieldsFormat that compresses blocks of data using ZStandard.