Each shard is built with a list of queries to run and
tags to add to the queries (List<QueryAndTags>). Some examples:
- For queries like FROM foo we'll use a one element list containing match_all, []. It loads all documents in the index and appends no extra fields to the loaded documents.
- For queries like FROM foo | WHERE a > 10 we'll use a one element list containing +single_value(a) +(a > 10), []. It loads all documents where a is single valued and greater than 10.
- For queries like FROM foo | STATS MAX(b) BY ROUND_TO(a, 0, 100) we'll use a two element list containing +single_value(a) +(a < 100), [0] and +single_value(a) +(a >= 100), [100]. It loads all documents where a is single valued and adds a constant 0 to the documents where a < 100 and the constant 100 to the documents where a >= 100.
IMPORTANT: Runners make no effort to deduplicate the results from multiple queries. If you need each document to appear only once, make sure the queries are mutually exclusive.
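To make the last example concrete, here is a minimal sketch of how such a two element list could be assembled. The field type (a long point named a), the caller-supplied singleValueA query standing in for the internal single_value(a) filter, and the exact shape of QueryAndTags (a Lucene Query plus a list of tag values) are assumptions for illustration, not details documented on this page.

```java
import java.util.List;

import org.apache.lucene.document.LongPoint;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;

// Imports for the Elasticsearch compute classes (LuceneSliceQueue and its nested types) omitted.
class RoundToQueryExample {
    // Builds the two element list for: FROM foo | STATS MAX(b) BY ROUND_TO(a, 0, 100).
    // Assumes QueryAndTags is roughly record QueryAndTags(Query query, List<Object> tags).
    static List<LuceneSliceQueue.QueryAndTags> roundToQueries(Query singleValueA) {
        Query below100 = new BooleanQuery.Builder()
            .add(singleValueA, BooleanClause.Occur.FILTER)
            .add(LongPoint.newRangeQuery("a", Long.MIN_VALUE, 99), BooleanClause.Occur.FILTER)  // a < 100
            .build();
        Query atLeast100 = new BooleanQuery.Builder()
            .add(singleValueA, BooleanClause.Occur.FILTER)
            .add(LongPoint.newRangeQuery("a", 100, Long.MAX_VALUE), BooleanClause.Occur.FILTER) // a >= 100
            .build();
        return List.of(
            new LuceneSliceQueue.QueryAndTags(below100, List.<Object>of(0L)),     // tag 0 for a < 100
            new LuceneSliceQueue.QueryAndTags(atLeast100, List.<Object>of(100L))  // tag 100 for a >= 100
        );
    }
}
```

Note that the two range queries split the value space at 100, so they are mutually exclusive as required above.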
Nested Class Summary
- static enum LuceneSliceQueue.PartitioningStrategy: Strategy used to partition each shard into slices.
- static final record LuceneSliceQueue.QueryAndTags: Query to run and tags to add to the results.
Field Summary
- static final int MAX_DOCS_PER_SLICE
- static final int MAX_SEGMENTS_PER_SLICE
Method Summary
- static LuceneSliceQueue create(List<? extends ShardContext> contexts, Function<ShardContext, List<LuceneSliceQueue.QueryAndTags>> queryFunction, DataPartitioning dataPartitioning, Function<org.apache.lucene.search.Query, LuceneSliceQueue.PartitioningStrategy> autoStrategy, int taskConcurrency, Function<ShardContext, org.apache.lucene.search.ScoreMode> scoreModeFunction)
- LuceneSlice nextSlice(LuceneSlice prev): Retrieves the next available LuceneSlice for processing.
- partitioningStrategies(): Strategy used to partition each shard in this queue.
- remainingShardsIdentifiers()
- int totalSlices()
Field Details
MAX_DOCS_PER_SLICE
public static final int MAX_DOCS_PER_SLICE
MAX_SEGMENTS_PER_SLICE
public static final int MAX_SEGMENTS_PER_SLICE
Method Details
nextSlice
public LuceneSlice nextSlice(LuceneSlice prev)
Retrieves the next available LuceneSlice for processing. This method implements a three-tiered strategy to minimize the overhead of switching between segments:
1. If a previous slice is provided, it first attempts to return the next sequential slice. This keeps a thread working on the same segments, minimizing the overhead of segment switching.
2. If that affinity attempt fails, it returns a slice from the sliceHeads queue, which is an entry point for a new, independent group of segments, allowing the calling Driver to work on a fresh set of segments.
3. If the sliceHeads queue is exhausted, it "steals" a slice from the stealableSlices queue. This fallback ensures all threads remain utilized.
Parameters:
prev - the previously returned LuceneSlice, or null if starting
Returns:
the next available LuceneSlice, or null if exhausted
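For illustration, a calling Driver would typically loop on this method and feed each returned slice back in as prev, so the queue can try segment affinity (tier 1) before falling back to the other tiers. A minimal sketch, where process is a hypothetical stand-in for the per-slice work:

```java
// Sketch of a driver loop over the queue; `process` is a hypothetical stand-in
// for whatever per-slice work the operator performs (e.g. running its query).
static void drive(LuceneSliceQueue queue) {
    LuceneSlice slice = null;
    while ((slice = queue.nextSlice(slice)) != null) {
        // Passing the previous slice back in lets the queue keep this thread on the
        // same segment group (tier 1) before handing out a fresh group (tier 2)
        // or stealing work from another group (tier 3).
        process(slice);
    }
}
```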
totalSlices
public int totalSlices()
partitioningStrategies
Strategy used to partition each shard in this queue.
remainingShardsIdentifiers
create
public static LuceneSliceQueue create(
    List<? extends ShardContext> contexts,
    Function<ShardContext, List<LuceneSliceQueue.QueryAndTags>> queryFunction,
    DataPartitioning dataPartitioning,
    Function<org.apache.lucene.search.Query, LuceneSliceQueue.PartitioningStrategy> autoStrategy,
    int taskConcurrency,
    Function<ShardContext, org.apache.lucene.search.ScoreMode> scoreModeFunction)
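A hedged usage sketch follows. The DataPartitioning.AUTO and PartitioningStrategy.SEGMENT constant names and the availability of ready-made ShardContext instances are assumptions not documented on this page; MatchAllDocsQuery and ScoreMode.COMPLETE_NO_SCORES are standard Lucene types.

```java
import java.util.List;

import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.ScoreMode;

// Imports for the Elasticsearch compute classes (LuceneSliceQueue, ShardContext, DataPartitioning) omitted.
class SliceQueueSetup {
    // Builds a queue that loads every document of every shard and appends no tags.
    static LuceneSliceQueue matchAllQueue(List<? extends ShardContext> shardContexts, int taskConcurrency) {
        return LuceneSliceQueue.create(
            shardContexts,
            // One match_all query per shard, with an empty tag list.
            ctx -> List.of(new LuceneSliceQueue.QueryAndTags(new MatchAllDocsQuery(), List.of())),
            DataPartitioning.AUTO,                                  // assumed constant name
            query -> LuceneSliceQueue.PartitioningStrategy.SEGMENT, // assumed fallback used by AUTO
            taskConcurrency,
            ctx -> ScoreMode.COMPLETE_NO_SCORES                     // no scoring needed for a plain load
        );
    }
}
```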