What are you trying to do?
207 built-in operators
Every built-in operator is a cascade you can read, extend, or compose. Start with what you’re trying to do, not what it’s called.
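Many of the classification entries below (CATEGORY, INTENT, STANCE, FORMALITY, and the other "zero-shot NLI" operators) share one pattern: each candidate label is wrapped in a hypothesis sentence, an entailment model scores that hypothesis against the text, and the best-scoring label wins. A minimal sketch of that control flow, with a toy word-overlap scorer standing in for the real NLI cross-encoder (the scorer and function names here are assumptions for illustration, not the tool's API):

```python
def classify_zero_shot(text, labels, entail_score):
    # Zero-shot NLI pattern: wrap each label in a hypothesis template,
    # score hypothesis-vs-text entailment, keep the best-scoring label.
    hypotheses = {label: f"this text is about {label}" for label in labels}
    scores = {lab: entail_score(text, hyp) for lab, hyp in hypotheses.items()}
    return max(scores, key=scores.get)

def toy_entail_score(text, hypothesis):
    # Crude stand-in for an NLI cross-encoder: fraction of hypothesis
    # words that also appear in the text.
    t, h = set(text.lower().split()), set(hypothesis.lower().split())
    return len(t & h) / len(h)

print(classify_zero_shot("my refund never arrived",
                         ["refund", "shipping"],
                         toy_entail_score))  # → refund
```

The same skeleton covers the "escape hatch" siblings: only the scoring function changes (an LLM call instead of an encoder).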
Identify target audience via zero-shot NLI
LLM-backed audience identification (escape hatch for AUDIENCE)
Assess authenticity of content via zero-shot NLI
LLM-backed authenticity assessment (escape hatch for AUTHENTICITY)
Classify text into user-specified buckets via zero-shot NLI
LLM-backed bucketing (escape hatch for BUCKET)
Multi-branch semantic classification via specialist zoo NLI
LLM-backed multi-branch semantic classification
Categorize text via zero-shot NLI with default or custom labels
LLM-backed categorization (escape hatch for CATEGORY)
Classify single text into one of the provided topics (zero-shot NLI)
Classify a text collection into one category (embedding majority vote)
LLM-backed collection classification (escape hatch for CLASSIFY)
LLM-backed single-text classification (escape hatch for CLASSIFY_SINGLE)
Assess complexity level for grouping (zero-shot NLI)
LLM-backed complexity assessment (escape hatch for COMPLEXITY)
Analyze credibility/reliability of text descriptions (zero-shot NLI)
LLM-backed credibility assessment (escape hatch for CREDIBILITY)
Detect subject matter domain for grouping (zero-shot NLI)
LLM-backed subject-matter domain detection (escape hatch for DOMAIN)
Predict engagement type/level via zero-shot NLI
LLM-backed engagement-level prediction (escape hatch for ENGAGEMENT)
Classify evidence type and strength
Detect formality level for grouping (zero-shot NLI)
LLM-backed formality detection (escape hatch for FORMALITY)
Built-in discriminator for CHOOSE stages without an explicit BY clause
Detect the semantic type of a value (email, phone, date, etc.)
Classify the communicative intent of text (zero-shot NLI)
LLM-backed intent classification (escape hatch for INTENT)
Detect language for grouping (xlm-roberta-lid)
LLM-backed language detection (escape hatch for LANGUAGE)
Check if value looks like a type (fuzzy)
LLM-backed looks_like (escape hatch for LOOKS_LIKE)
Evaluate whether a plain-English statement is true or false
Identify narrative frame via zero-shot NLI
LLM-backed narrative-frame identification (escape hatch for NARRATIVE)
Assess data quality (0.0-1.0)
LLM-backed data quality assessment (0.0-1.0)
Dimension-shaped sentiment analysis for GROUP BY
LLM-backed dimensional sentiment analysis (escape hatch for SENTIMENT)
Identify stance on a topic via zero-shot NLI (topic required)
LLM-backed stance detection on a topic (escape hatch for STANCE)
Semantic pattern matching via specialist zoo NLI (MATCH/YIELD syntax)
LLM-backed semantic pattern matching
Extract timeframe dimension via zero-shot NLI
LLM-backed timeframe extraction (escape hatch for TIMEFRAME)
Extract topics from text collection and assign each text to a topic
LLM-only topic extraction + assignment (escape hatch for TOPICS)
Assess toxicity level for grouping (toxic-bert)
LLM-backed toxicity assessment (escape hatch for TOXICITY)
Extract vibe categories from a text collection and assign each text to one
LLM-backed vibe extraction and assignment (escape hatch for VIBES)
Predict viral potential via zero-shot NLI
LLM-backed virality prediction (escape hatch for VIRALITY)
Crawl a website and extract structured data from each page (via Firecrawl)
Extract specific information from unstructured text (zero-shot NER)
LLM-backed extraction (escape hatch for EXTRACTS)
Extract structured fields from text per a user-supplied schema
Merge multiple timelines into unified chronological sequence
Extract information from text using natural-language instructions
Parse, validate, or transform patterned strings using plain-English instructions
Parse US address into JSON with fields: street_number, street_name, unit, city, state, zip, country, formatted
LLM-backed address parsing (escape hatch for PARSE_ADDRESS)
Parse date from any format into JSON with fields: year, month, day, iso, formatted, day_of_week, is_valid
LLM-backed date parsing (escape hatch for PARSE_DATE)
Parse a PDF into structured JSON (pypdf + Donut OCR fallback)
Parse raw email into structured JSON (from, to, cc, date, subject, body)
LLM-backed email parsing (escape hatch for PARSE_EMAIL)
Parse person name into JSON with fields: prefix, first, middle, last, suffix, nickname, formatted
LLM-backed name parsing (escape hatch for PARSE_NAME)
Parse phone number into JSON with fields: country_code, area_code, e164, national, international, is_valid, type
LLM-backed phone parsing (escape hatch for PARSE_PHONE)
Extract structured data from text using a predefined type schema
Re-crawl a stored web-table using its saved configuration
Extract (head, type, tail) relation triples from text
Extract (subject, predicate, object, evidence) knowledge graph triples with source context
Fetch an RSS/Atom feed and return entries as a table
Fetch a URL and return its content as clean markdown (via Firecrawl)
Extract a value from JSON using a plain-English path description
Split a value into components without hardcoding a delimiter
Extract multiple distinct values from a compound/messy field
Extract chronologically ordered events from text
Extract (subject, predicate, object) knowledge graph triples from text
Expand a compound value into separate elements (handles mixed delimiters)
AI web research agent — no starting URL required (via Firecrawl)
Extract structured data from multiple pages using LLM-powered extraction
Extract structured data from a URL as a table (LLM-powered)
Score how strongly text supports a specific message or stance (0.0-1.0)
Check if an image semantically matches a text query (cross-modal)
Cross-modal cosine similarity between an image and a text query
Fuzzy cross-match two string arrays via bge-m3 embeddings
Check whether two values match under a relationship
Return TRUE if text semantically matches the criterion (cross-encoder)
LLM-backed boolean match (escape hatch for MEANS when encoder-based matching is insufficient)
Cosine similarity between two texts (0.0 to 1.0) via bge-m3
Check if two words sound similar (phonetically)
LLM-backed phonetic similarity (escape hatch for SOUNDS_LIKE)
Extract hidden assumptions from an argument or claim
Return TRUE if text_a contradicts text_b (3-class NLI)
LLM-backed contradiction check (escape hatch for CONTRADICTS)
Return TRUE if premise entails conclusion (zero-shot NLI)
LLM-backed logical implication (escape hatch for IMPLIES)
Score how strongly evidence supports a claim (0.0-1.0)
Condense individual text into brief summary (scalar, per-row)
Find common ground among texts via centrality + LLM summary
LLM-only consensus (escape hatch for CONSENSUS)
Combine multiple text values into one coherent output
Summarize a group of texts into one concise overview
Extract URLs from text, fetch them with a browser, and return a summary
Extract N main topics from texts (embed centrality + LLM naming)
LLM-only topic extraction (escape hatch for THEMES)
Return a 0.0-1.0 relevance score for text vs criterion (cross-encoder)
LLM-backed 0.0-1.0 relevance score (escape hatch for ABOUT/RELEVANCE TO)
Pick the single best value from a group by a plain-English quality criterion
Find unusual or atypical items via embeddings (+ optional criteria)
LLM-backed outlier detection (escape hatch for OUTLIERS)
PageRank centrality on an ad-hoc edge list (NetworkX)
Rank a group of items by a subjective multi-factor criterion
Find similar documents via vector search
Hybrid semantic + keyword search in Elasticsearch
Pure semantic search in Pinecone
Discover URLs on a website and return them as a table
Search the web and return results as a table (via Firecrawl)
Generate 768-dim embedding vector from text (on-box nomic-embed-text-v1.5)
Batch embed rows and store in lars_embeddings
Batch embed rows and store in Elasticsearch for hybrid search
Batch embed rows and store in Pinecone for vector search
Check embedding coverage for a table/column
Generate embedding and store with table/column/ID tracking
SigLIP 2 embedding for an image (L2-normalized, shared image/text space)
Bayesian A/B test (Beta-Binomial) returning full posterior + recommendation
Assess the confidence/quality of a cascade execution result
Compare values in a group for similarities, differences, and patterns
Generate the strongest counterargument to a position
Differentially-private count (Laplace mechanism, sensitivity=1)
Differentially-private mean (Laplace mechanism)
Detect logical fallacies in an argument
Forecast a univariate time series via zero-shot Chronos-Bolt
Kaplan-Meier survival curve — time-to-event estimator with 95% CI
Return the latest output from a cascade cell
Aggregate sentiment score (-1.0 to 1.0) for a collection of texts
Per-row sentiment score (-1.0 to 1.0) for a text value
Shortest path between two nodes on an ad-hoc edge list (NetworkX)
Construct the strongest version of an argument
Identify logical weaknesses and gaps in an argument
Remove or mask personally identifiable information from text
Type-cast messy real-world values that trip up standard CAST
Return the canonical/official form of a value (auto-detects entity type)
Extract a 4-digit year from messy text; returns -1 if undetermined
LLM-backed year extraction (escape hatch for CLEAN_YEAR)
Pick the best non-null value from a group (quality-aware COALESCE)
Fill in missing parts of a partial value using context
Correct factual/logical errors in a value using context (heavier sibling of FIX)
Provide context-aware defaults for null/empty values (smarter COALESCE)
Fill a null/empty value by inferring from context
Auto-fix common data-quality issues (typos, casing, formats)
Statistical imputation for missing values (distribution-aware)
Standardize a value to its canonical form for a given type
Parse currency to structured JSON (format normalization only)
LLM-backed currency parsing with approximate rates (escape hatch for NORMALIZE_CURRENCY)
Parse and normalize a quantity expression to a structured JSON
LLM-backed quantity parsing (escape hatch for NORMALIZE_QUANTITY)
Convert a value to a target unit via pint
LLM-backed unit conversion (escape hatch for NORMALIZE_UNIT)
Clear a session parameter previously set via PARAM_SET
Get a session parameter value
Set a session parameter value
Check whether a value matches the expected format for a type
Validate a value against a plain-English rule (more flexible sibling of VALID)
Deduplicate texts by semantic similarity (embeddings + threshold graph)
LLM-backed deduplication (escape hatch for DEDUPE)
Merge duplicate records into a composite golden record
Merge records with an explicit conflict-resolution strategy
Check if two values refer to the same entity (fuzzy equality)
Rewrite informal/messy text into a professional form
Translate text between languages (auto-detects source language)
Read query results or analysis aloud via TTS
Speech-to-text via Whisper large-v3-turbo
Apply any arbitrary prompt to text (ultimate flexibility)
Answer plain-English questions about your data by writing and running SQL
Return the SQL that ASK_DATA would run, without executing it
Call any registered LARS skill from SQL and receive results as a table
Generate a DuckDB SQL expression string from a plain-English description
Convert timeline data to Mermaid timeline visualization
Convert triples to Mermaid graph visualization
Render a chart specification to PNG image
Artistically stylize a chart image while preserving data
Generate a Plotly chart from data using LLM
Convert triples to relational node/edge graph tables for recursive CTE traversal
Generate a Vega-Lite chart from data using LLM
Apply theme styling to a chart specification
Analyze query results with LLM based on a prompt
Remove duplicate rows
Add LLM-computed columns to query results
Filter query results using LLM-based semantic matching
Group by column and aggregate another
Investigative analysis: explore related data to answer questions
Transform columns to rows (unpivot/wide-to-long)
Pass data through unchanged (no-op)
Transform rows to columns with smart pivot/cross-tabulation
Transform query results with inline Python
Compose multiple panels into a canvas layout for visualization
Take a random sample of rows
Call a skill and return JSON content directly (for json_extract_string)
Compute comprehensive column profiles for chart planning
Get top N rows by column value
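Several operators above (text similarity via bge-m3, cross-modal image similarity via SigLIP 2, semantic deduplication) bottom out in cosine similarity between embedding vectors. The math itself is small; a pure-Python sketch (illustrating the formula only, not the tool's API):

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = dot(a, b) / (||a|| * ||b||), in [-1, 1].
    # Embeddings of related texts typically land in [0, 1].
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Parallel vectors score 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))  # ≈ 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # → 0.0
```

In the deduplication operators, pairs whose similarity exceeds a threshold become edges in a graph, and each connected component collapses to one representative.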