JSON Formatter for Elasticsearch Queries and Responses

Elasticsearch's Query DSL is one of the most powerful — and most complex — JSON-based query languages in use today. A multi-level bool query with nested must, should, filter, and must_not clauses, combined with aggregations and highlights, can produce a JSON document that runs to hundreds of lines even before you add the response. When something isn't returning the results you expect, the first step is always to read the query carefully — and that's nearly impossible when the query was built programmatically and output as a single compact string. This formatter is an essential tool for Elasticsearch developers: paste your query, mapping, index settings, or response body and immediately get a structured, indented view that makes the query logic readable. Use it to debug scoring issues, verify that filters are properly nested inside bool queries, and confirm that your mapping definitions use the correct field types.

Open JSON Formatter →

What Is JSON Formatter for Elasticsearch Queries and Responses?

Elasticsearch uses JSON for all operations: search queries, index mappings, cluster settings, and API responses. The Query DSL is a JSON-based query language with complex nesting for bool logic, aggregations, and pipelines. Formatting Elasticsearch JSON makes the query structure and logic readable, which is essential for debugging and optimization.

How to Use the JSON Formatter

  1. Copy your Elasticsearch query body, mapping definition, or API response from your client, Kibana Dev Tools, or log output.
  2. Paste it into the input area above.
  3. Click 'Format' to expand the full query or response structure.
  4. Identify the top-level sections: query, aggs, sort, _source, highlight, etc.
  5. For bool queries, verify the correct placement of clauses: must, should, filter, must_not.
  6. Copy the formatted query for use in Kibana Dev Tools, your application code, or documentation.
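The core of the workflow above is a parse-and-reindent pass over the JSON. A minimal sketch of that step in Python, using only the standard library (the compact query string is a made-up example, not output from any specific client):

```python
import json

# A compact query string, as it might appear in application logs.
compact = '{"query":{"bool":{"filter":[{"term":{"status":"published"}}]}},"size":20}'

# json.loads validates the syntax; json.dumps re-emits it indented.
parsed = json.loads(compact)
formatted = json.dumps(parsed, indent=2)
print(formatted)
```

If the input is malformed, `json.loads` raises a `JSONDecodeError` that points at the offending character, which is the same kind of feedback the formatter gives you on invalid input.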

Example

{
  "query": {
    "bool": {
      "must": [
        { "match": { "title": "elasticsearch" } }
      ],
      "filter": [
        { "term": { "status": "published" } },
        { "range": { "publish_date": { "gte": "2025-01-01" } } }
      ],
      "must_not": [
        { "term": { "category": "deprecated" } }
      ]
    }
  },
  "aggs": {
    "by_category": { "terms": { "field": "category.keyword" } }
  },
  "size": 20
}

Ready to Try It?

Free, browser-based, no signup required.

Launch JSON Formatter Free →

FAQs

Can I format queries generated by Elasticsearch client libraries?

Yes. The Java High Level REST Client, the Python elasticsearch-py library, and the JavaScript @elastic/elasticsearch client all produce standard JSON queries. Enable request logging in your client, copy the query body from the logs, and paste it here.

What's the difference between must, should, filter, and must_not?

The must, should, filter, and must_not clauses have distinct behaviors. Format your query to verify clause placement: a term filter accidentally placed in `must` instead of `filter` contributes to scoring and bypasses filter caching, and a `should` clause without `minimum_should_match` may not behave as expected.
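Once the query is formatted and parsed, this kind of clause-placement check can even be automated. A sketch of a hypothetical lint helper (`misplaced_filters` is an illustrative name, not part of the formatter or any Elasticsearch client):

```python
import json

def misplaced_filters(query: dict) -> list:
    """Return term/range clauses found under bool.must.

    Exact-value clauses under `must` still work, but they contribute
    to scoring and bypass filter caching; `filter` is usually the
    better home for them. (Hypothetical helper for illustration.)
    """
    bool_query = query.get("query", {}).get("bool", {})
    return [
        clause for clause in bool_query.get("must", [])
        if "term" in clause or "range" in clause
    ]

query = json.loads(
    '{"query":{"bool":{"must":[{"match":{"title":"elasticsearch"}},'
    '{"term":{"status":"published"}}]}}}'
)
print(misplaced_filters(query))  # flags the term clause under must
```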

Can I format Elasticsearch responses as well as queries?

Absolutely. The full Elasticsearch response, including metadata, the hits.hits array, _source fields, and aggregation buckets, is valid JSON. Paste the complete response to see the structure clearly, which is especially useful for debugging aggregation output shapes.
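To see why the response structure matters, here is a trimmed response body in the standard search-response shape (the document values are invented) and the traversal needed to pull out hits and aggregation buckets:

```python
import json

# A trimmed response body following Elasticsearch's standard
# search response layout; field values are invented examples.
response = json.loads("""
{
  "took": 3,
  "hits": {
    "total": {"value": 2, "relation": "eq"},
    "hits": [
      {"_id": "1", "_source": {"title": "Intro"}},
      {"_id": "2", "_source": {"title": "Deep dive"}}
    ]
  },
  "aggregations": {
    "by_category": {
      "buckets": [{"key": "guides", "doc_count": 2}]
    }
  }
}
""")

# Documents live two levels down, at hits.hits[*]._source.
titles = [hit["_source"]["title"] for hit in response["hits"]["hits"]]

# Aggregation results live under aggregations.<agg_name>.buckets.
buckets = {b["key"]: b["doc_count"]
           for b in response["aggregations"]["by_category"]["buckets"]}
print(titles, buckets)
```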

How does this tool fit in with Kibana Dev Tools?

In Kibana Dev Tools, click the wrench icon next to a query to copy it. Alternatively, write your query here, format it for readability, then paste it back into Dev Tools for execution. Dev Tools has its own inline formatter, but this tool is useful for external sharing.

Can I format index mapping definitions too?

Yes. Mapping definitions are JSON objects with `properties`, `type`, and format configurations. Formatting makes it easy to verify field types, identify missing keyword sub-fields for aggregations, and check dynamic mapping settings before applying the mapping to a production index.
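The keyword sub-field check mentioned above can be sketched against a parsed mapping. This is an illustrative snippet, not a feature of the formatter; the mapping content is an invented example in the standard `properties`/`type` layout:

```python
import json

# A trimmed mapping definition in the standard properties/type layout.
mapping = json.loads("""
{
  "properties": {
    "title":    {"type": "text"},
    "category": {"type": "text",
                 "fields": {"keyword": {"type": "keyword"}}},
    "views":    {"type": "integer"}
  }
}
""")

# text fields without a keyword sub-field can't be used directly
# in terms aggregations, so flag them before the mapping ships.
missing_keyword = [
    name for name, spec in mapping["properties"].items()
    if spec.get("type") == "text" and "keyword" not in spec.get("fields", {})
]
print(missing_keyword)  # → ['title']
```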

Does the formatter validate Elasticsearch-specific syntax?

No. The formatter treats Elasticsearch queries as standard JSON and doesn't apply Elasticsearch-specific validation. It will format and validate the JSON structure, but it won't catch semantic errors like using an invalid aggregation type name or referencing a field that doesn't exist in your mapping.
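The distinction between syntactic and semantic validity is easy to demonstrate with Python's json module, which draws the same line a generic JSON formatter does (the aggregation type name below is deliberately fake):

```python
import json

# Structurally valid JSON with a bogus aggregation type: a generic
# JSON parser accepts it, because only the syntax is checked.
semantically_wrong = '{"aggs": {"x": {"not_a_real_agg_type": {"field": "f"}}}}'
parsed = json.loads(semantically_wrong)  # parses without complaint

# A genuine syntax error (trailing comma) is caught immediately.
malformed = '{"query": {"match_all": {}},}'
try:
    json.loads(malformed)
    syntax_ok = True
except json.JSONDecodeError:
    syntax_ok = False
print(syntax_ok)  # → False
```

Only Elasticsearch itself, when it executes the request, will reject `not_a_real_agg_type`.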

How do I inspect pipeline aggregations?

Pipeline aggregations like `moving_avg` or `bucket_script` are nested within the `aggs` section just like regular aggregations. Paste the full query body including the `aggs` section, format it, and the pipeline aggregation's `buckets_path` and parameters will be clearly visible.
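As a concrete illustration, here is a sketch of an `aggs` section with a `bucket_script` pipeline aggregation (the field names and metric names are invented for the example), pretty-printed the same way the formatter would render it:

```python
import json

# An aggs section where bucket_script computes a per-bucket value;
# buckets_path wires the sibling metric aggregations into the script.
aggs = {
    "aggs": {
        "monthly": {
            "date_histogram": {"field": "publish_date",
                               "calendar_interval": "month"},
            "aggs": {
                "revenue": {"sum": {"field": "price"}},
                "count": {"value_count": {"field": "price"}},
                "avg_order": {
                    "bucket_script": {
                        "buckets_path": {"rev": "revenue", "n": "count"},
                        "script": "params.rev / params.n"
                    }
                }
            }
        }
    }
}
print(json.dumps(aggs, indent=2))
```

Formatted this way, it is immediately visible that `avg_order` sits at the same nesting level as the metrics its `buckets_path` references, which is the most common thing to get wrong with pipeline aggregations.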