Convert Parquet to JSON
Drop a .parquet file below and download JSON or NDJSON. Filter and sort before exporting — and your data never leaves the browser.
- 100% client-side
- JSON array or NDJSON
- Type-preserving
- Snappy / Zstd / Gzip supported
Open a .parquet file, then click Download JSON — or use the chevron to switch to NDJSON. The export honors the current filter, sort, and search.
Why convert Parquet to JSON?
Apache Parquet is a columnar binary format optimized for analytics (Spark, DuckDB, Pandas, Hugging Face). JSON is the lingua franca of web APIs, microservices, and developer tooling. Converting to JSON (or NDJSON) lets you feed Parquet data into REST endpoints, ingest it into Elasticsearch / OpenSearch / ClickHouse, pipe it into jq, or paste it directly into a request body during debugging.
Most online tools either require a Python environment or upload your data to their servers. Parqui converts entirely in your browser, with full support for compressed Parquet files and proper type preservation.
Frequently asked questions
Is the conversion done on my computer?
Yes. Parqui runs entirely client-side in your browser. Your file is never uploaded to a server — the .parquet bytes are read locally and the JSON is generated locally. Nothing leaves your machine.
What's the difference between JSON and NDJSON?
JSON produces a single pretty-printed array — convenient for human reading or APIs that expect a JSON body. NDJSON (newline-delimited JSON) produces one compact JSON object per line — ideal for streaming, log pipelines, BigQuery, ClickHouse, jq processing, and other line-oriented tooling.
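The difference is easy to see with a couple of sample rows. This is a minimal illustration (the row data is made up, not part of the tool's API): the same records serialized both ways.

```typescript
// Sample rows standing in for decoded Parquet records.
const rows = [
  { id: 1, name: "alice", active: true },
  { id: 2, name: "bob", active: false },
];

// JSON: one pretty-printed array, suitable as a request body.
const asJson = JSON.stringify(rows, null, 2);

// NDJSON: one compact JSON object per line, suitable for
// streaming, jq, and other line-oriented tooling.
const asNdjson = rows.map((row) => JSON.stringify(row)).join("\n");

console.log(asNdjson);
// {"id":1,"name":"alice","active":true}
// {"id":2,"name":"bob","active":false}
```

Because each NDJSON line is a complete document, a consumer can process one record at a time instead of parsing the whole array first.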
Are there any file size limits?
There is no hard limit; the practical ceiling is your browser's available memory. JSON output is typically 2–3× larger than the source Parquet because Parquet is columnar and compressed. For very large datasets, NDJSON is more memory-friendly to consume downstream.
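The memory advantage of NDJSON comes from line-at-a-time consumption. Here is a small Node.js sketch (the inline string stands in for a file or network stream; `sumIds` is a hypothetical consumer, not part of Parqui) showing that a downstream reader never needs the whole dataset in memory:

```typescript
import { createInterface } from "node:readline";
import { Readable } from "node:stream";

// Three NDJSON records; in practice this would be a file stream.
const ndjson = '{"id":1}\n{"id":2}\n{"id":3}\n';

// Consume NDJSON one line at a time — each line parses independently.
async function sumIds(source: NodeJS.ReadableStream): Promise<number> {
  let total = 0;
  for await (const line of createInterface({ input: source })) {
    if (line.trim() === "") continue; // tolerate blank lines
    total += JSON.parse(line).id;     // each line is a complete JSON object
  }
  return total;
}

sumIds(Readable.from(ndjson)).then((total) => console.log(total)); // 6
```

A plain JSON array, by contrast, must be fully buffered and parsed before the first record is available.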
How are types preserved?
Native JSON types are preserved: numbers stay numbers, booleans stay booleans, strings stay strings, nulls stay nulls. Dates and timestamps are serialized as ISO 8601 strings. BigInt values are serialized as decimal strings (since JSON doesn't have a native bigint type).
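The BigInt and Date rules can be reproduced with a plain `JSON.stringify` replacer. This is an illustrative sketch of the serialization behavior described above, not Parqui's actual implementation; `toJsonSafe` is a hypothetical helper name:

```typescript
// BigInt has no JSON representation, so it becomes a decimal string.
// Date already serializes to ISO 8601 via its built-in toJSON method,
// so the replacer only needs to handle BigInt.
function toJsonSafe(_key: string, value: unknown): unknown {
  if (typeof value === "bigint") return value.toString();
  return value;
}

const row = {
  count: 42,                        // number stays a number
  ok: true,                         // boolean stays a boolean
  big: 9007199254740993n,           // beyond Number.MAX_SAFE_INTEGER
  ts: new Date(Date.UTC(2024, 0, 15)),
};

const out = JSON.stringify(row, toJsonSafe);
console.log(out);
// {"count":42,"ok":true,"big":"9007199254740993","ts":"2024-01-15T00:00:00.000Z"}
```

Without the replacer, `JSON.stringify` throws a `TypeError` on any BigInt value, so the decimal-string convention is the lossless escape hatch.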
Does it work with compressed Parquet files (Snappy, Zstd, Gzip)?
Yes. All common Parquet compression codecs — Snappy, Gzip, Brotli, Zstd, LZ4 — are supported out of the box via hyparquet-compressors.
Can I export only filtered or sorted rows?
Yes. Apply a filter, sort, or full-text search in the toolbar — the JSON export honors the active pipeline and downloads exactly what you see, in the same row order.
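Conceptually, the export pipeline is filter, then sort, then serialize exactly the visible rows. A minimal sketch of that order of operations (the row shape and predicate here are made up for illustration):

```typescript
type Row = { city: string; temp: number };

const rows: Row[] = [
  { city: "Oslo", temp: 4 },
  { city: "Cairo", temp: 29 },
  { city: "Lima", temp: 19 },
];

// Filter first, then sort, then serialize — the export contains
// only the rows that survive the pipeline, in their displayed order.
const exported = rows
  .filter((r) => r.temp > 10)       // active filter
  .sort((a, b) => a.temp - b.temp)  // active sort (ascending)
  .map((r) => JSON.stringify(r))    // NDJSON: one object per line
  .join("\n");

console.log(exported);
// {"city":"Lima","temp":19}
// {"city":"Cairo","temp":29}
```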
Is it free? Do I need to sign up?
The online converter is free: no signup, no account, no email required. Parqui's npm components require a paid license for commercial use, but all the web tools are free for everyone.
Can I run this conversion in Node.js?
Yes. The @parqui/core npm package exposes rowsToJSON() and the Parquet readers. The same conversion runs in Node.js, Bun, Deno, or any modern JavaScript runtime.