Write, run & share DuckDB queries online for free using OneCompiler's DuckDB editor, one of the most robust, feature-rich online editors for DuckDB. Getting started with OneCompiler's DuckDB editor is simple and fast: the editor shows sample boilerplate code when you choose 'DuckDB' as the language, so you can start writing queries to learn and test online without worrying about the tedious process of installation.
DuckDB is an open-source, in-process analytical database management system designed for fast analytical queries. Often called "SQLite for analytics," it combines the simplicity of SQLite with the analytical power of columnar databases. DuckDB supports modern SQL features including window functions, CTEs, and complex aggregations, making it perfect for data analysis and OLAP workloads.
Creates a new table with specified columns and data types. DuckDB supports a wide range of data types including INTEGER, VARCHAR, DOUBLE, BOOLEAN, DATE, TIMESTAMP, and complex types like arrays and structs. Tables are stored in a columnar format optimized for analytical queries, which makes aggregations and scans extremely fast.
CREATE TABLE EMPLOYEE (
    empId INTEGER PRIMARY KEY,
    name VARCHAR NOT NULL,
    dept VARCHAR,
    salary DOUBLE,
    hire_date DATE
);
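As a quick sketch of the nested types mentioned above, columns can also hold structs and lists (the table and column names here are illustrative, not part of the examples that follow):

```sql
-- A struct column and a list column in one table
CREATE TABLE profile (
    id INTEGER,
    address STRUCT(city VARCHAR, zip VARCHAR),
    scores INTEGER[]
);
-- Struct literals use {...}, list literals use [...]
INSERT INTO profile VALUES (1, {'city': 'Berlin', 'zip': '10115'}, [90, 85, 88]);
```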
Adds new rows to a table. DuckDB supports inserting single rows, multiple rows in one statement, or inserting from query results. Multiple-row inserts are more efficient because they reduce per-statement overhead. DuckDB also supports INSERT OR REPLACE for upsert operations and INSERT OR IGNORE to skip duplicate key violations.
INSERT INTO EMPLOYEE VALUES (1, 'Clark', 'Sales', 50000, '2023-01-15');
INSERT INTO EMPLOYEE VALUES (2, 'Dave', 'Accounting', 45000, '2023-03-20');
INSERT INTO EMPLOYEE VALUES (3, 'Ava', 'Sales', 55000, '2023-02-10');
-- Insert multiple rows
INSERT INTO EMPLOYEE VALUES
    (4, 'Bob', 'Engineering', 60000, '2023-04-01'),
    (5, 'Eve', 'HR', 48000, '2023-05-15');
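The upsert variants mentioned above can be sketched against the same EMPLOYEE table; both rely on its PRIMARY KEY to detect conflicts:

```sql
-- Upsert: replace the existing row whose empId is 1
INSERT OR REPLACE INTO EMPLOYEE VALUES (1, 'Clark', 'Marketing', 52000, '2023-01-15');
-- Skip the insert silently because empId 2 already exists
INSERT OR IGNORE INTO EMPLOYEE VALUES (2, 'Dave', 'Accounting', 45000, '2023-03-20');
```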
Retrieves data from tables with powerful filtering, sorting, and limiting capabilities. DuckDB is optimized for analytical SELECT queries and can process millions of rows efficiently due to its columnar storage and vectorized execution engine. Use LIMIT and OFFSET for pagination, and combine with ORDER BY for consistent results.
-- Select all
SELECT * FROM EMPLOYEE;
-- Select with condition
SELECT * FROM EMPLOYEE WHERE dept = 'Sales';
-- Select specific columns with ordering
SELECT name, salary FROM EMPLOYEE ORDER BY salary DESC;
-- Limit and offset
SELECT * FROM EMPLOYEE LIMIT 10 OFFSET 5;
-- Select with multiple conditions
SELECT * FROM EMPLOYEE WHERE dept = 'Sales' AND salary > 50000;
Modifies existing rows in a table. Use a WHERE clause to specify which rows to update; without one, all rows will be modified. DuckDB supports updating multiple columns in a single statement and can use subqueries to compute new values. Updates in DuckDB are efficient due to its copy-on-write architecture.
-- Update single column
UPDATE EMPLOYEE SET dept = 'Marketing' WHERE empId = 1;
-- Update multiple columns
UPDATE EMPLOYEE SET dept = 'Sales', salary = 52000 WHERE empId = 2;
-- Update with calculation
UPDATE EMPLOYEE SET salary = salary * 1.10 WHERE dept = 'Engineering';
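A subquery can also supply the new value, as mentioned above; this sketch raises every below-average salary up to the company average:

```sql
-- Subquery in SET computes the value; subquery in WHERE picks the rows
UPDATE EMPLOYEE
SET salary = (SELECT AVG(salary) FROM EMPLOYEE)
WHERE salary < (SELECT AVG(salary) FROM EMPLOYEE);
```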
Removes rows from a table based on conditions. Always use a WHERE clause to avoid deleting all data. For removing all rows while keeping the table structure, TRUNCATE is more efficient. DuckDB handles deletes efficiently using its columnar storage format, marking rows as deleted without immediate physical removal.
-- Delete specific rows
DELETE FROM EMPLOYEE WHERE empId = 1;
-- Delete with condition
DELETE FROM EMPLOYEE WHERE dept = 'Sales' AND salary < 45000;
-- Delete all rows
TRUNCATE TABLE EMPLOYEE;
DuckDB excels at analytical aggregations with functions like COUNT, SUM, AVG, MIN, MAX, and many more. Use GROUP BY to aggregate by categories and HAVING to filter grouped results. DuckDB's columnar storage makes these operations extremely fast: it reads only the columns needed and uses vectorized processing for maximum throughput.
CREATE TABLE sales (
    product VARCHAR,
    amount DOUBLE,
    quantity INTEGER
);
INSERT INTO sales VALUES ('Laptop', 999.99, 2), ('Mouse', 29.99, 10),
    ('Laptop', 999.99, 1), ('Keyboard', 79.99, 5);
-- Group by with aggregates
SELECT product,
    COUNT(*) AS orders,
    SUM(quantity) AS total_qty,
    ROUND(SUM(amount * quantity), 2) AS revenue
FROM sales
GROUP BY product
ORDER BY revenue DESC;
-- Having clause
SELECT product, SUM(quantity) AS total_qty
FROM sales
GROUP BY product
HAVING SUM(quantity) > 3;
Window functions perform calculations across rows related to the current row without collapsing results like GROUP BY. DuckDB has excellent support for window functions including ROW_NUMBER, RANK, DENSE_RANK, LAG, LEAD, and aggregate functions with OVER clause. Use PARTITION BY to define groups and ORDER BY to define row ordering within each partition.
CREATE TABLE emp (name VARCHAR, dept VARCHAR, salary INTEGER);
INSERT INTO emp VALUES ('Alice', 'Sales', 50000), ('Bob', 'Sales', 60000),
    ('Charlie', 'Eng', 70000), ('Diana', 'Eng', 65000);
SELECT name, dept, salary,
    ROW_NUMBER() OVER (PARTITION BY dept ORDER BY salary DESC) AS rank_in_dept,
    SUM(salary) OVER (PARTITION BY dept) AS dept_total,
    salary - AVG(salary) OVER (PARTITION BY dept) AS diff_from_avg
FROM emp;
-- Running total
SELECT name, salary,
    SUM(salary) OVER (ORDER BY name) AS running_total
FROM emp;
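LAG and LEAD, mentioned above, fetch values from neighboring rows; a minimal sketch against the same emp table:

```sql
-- Compare each salary to its neighbors within the same department
SELECT name, dept, salary,
    LAG(salary) OVER (PARTITION BY dept ORDER BY salary) AS prev_salary,
    LEAD(salary) OVER (PARTITION BY dept ORDER BY salary) AS next_salary
FROM emp;
```

LAG returns NULL for the first row of each partition and LEAD returns NULL for the last, since there is no neighbor to read from.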
CTEs provide a way to write modular, readable queries by defining temporary named result sets. Use the WITH clause to define one or more CTEs before your main query. CTEs can reference each other and the main query can reference any CTE. DuckDB also supports recursive CTEs for hierarchical data traversal.
WITH dept_stats AS (
    SELECT dept,
        COUNT(*) AS emp_count,
        AVG(salary) AS avg_salary
    FROM EMPLOYEE
    GROUP BY dept
),
high_paying AS (
    SELECT * FROM dept_stats WHERE avg_salary > 50000
)
SELECT * FROM high_paying ORDER BY avg_salary DESC;
-- Multiple CTEs: the second references the first
WITH big_orders AS (
    SELECT * FROM sales WHERE quantity >= 5
),
product_totals AS (
    SELECT product, SUM(amount * quantity) AS total FROM big_orders GROUP BY product
)
SELECT * FROM product_totals;
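The recursive CTEs mentioned above can be sketched with the classic generate-a-sequence example, where the CTE references itself until the WHERE condition stops the recursion:

```sql
-- Recursive CTE: produces the numbers 1 through 5
WITH RECURSIVE nums(n) AS (
    SELECT 1
    UNION ALL
    SELECT n + 1 FROM nums WHERE n < 5
)
SELECT n FROM nums;
```

The same pattern walks hierarchical data such as org charts by joining the recursive part back to a parent-id column.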
DuckDB supports list (array) types for storing multiple values in a single column. Use square brackets to create lists and the UNNEST function to expand list elements into separate rows. This is useful for denormalized data, tags, or any multi-valued attributes. DuckDB also provides list functions like list_aggregate, list_filter, and list_transform.
CREATE TABLE users (
    id INTEGER,
    name VARCHAR,
    tags VARCHAR[]
);
INSERT INTO users VALUES (1, 'Alice', ['developer', 'python', 'sql']);
INSERT INTO users VALUES (2, 'Bob', ['analyst', 'excel']);
-- Query with lists
SELECT * FROM users;
-- Unnest to expand lists into rows
SELECT id, name, UNNEST(tags) as tag FROM users;
-- List functions
SELECT name, LENGTH(tags) as tag_count, tags[1] as first_tag FROM users;
Combines rows from multiple tables based on related columns. DuckDB supports INNER, LEFT, RIGHT, FULL OUTER, and CROSS joins. Join performance is optimized through hash joins and the query optimizer automatically chooses the best join strategy. DuckDB also supports NATURAL joins and joins with USING clause for cleaner syntax when column names match.
CREATE TABLE departments (dept_id INTEGER, dept_name VARCHAR);
CREATE TABLE employees (emp_id INTEGER, name VARCHAR, dept_id INTEGER);
INSERT INTO departments VALUES (1, 'Sales'), (2, 'Engineering');
INSERT INTO employees VALUES (1, 'Alice', 1), (2, 'Bob', 2), (3, 'Charlie', 1);
-- Inner join
SELECT e.name, d.dept_name
FROM employees e
INNER JOIN departments d ON e.dept_id = d.dept_id;
-- Left join
SELECT e.name, d.dept_name
FROM employees e
LEFT JOIN departments d ON e.dept_id = d.dept_id;
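The USING clause mentioned above is a shorthand for equality joins when the column name matches on both sides, and it emits the join column only once:

```sql
-- Same inner join as above, written with USING
SELECT name, dept_name
FROM employees
JOIN departments USING (dept_id);
```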
DuckDB provides comprehensive string functions for text manipulation. Common functions include LENGTH, UPPER, LOWER, SUBSTRING, CONCAT, REPLACE, and TRIM. DuckDB also supports LIKE and ILIKE for pattern matching, and regular expressions with regexp_matches and regexp_replace for advanced text processing.
SELECT
    UPPER('hello') AS uppercase,
    LOWER('HELLO') AS lowercase,
    LENGTH('hello') AS len,
    CONCAT('Hello', ' ', 'World') AS combined,
    SUBSTRING('Hello World', 1, 5) AS sub,
    REPLACE('Hello World', 'World', 'DuckDB') AS replaced,
    TRIM(' hello ') AS trimmed;
-- Pattern matching
SELECT * FROM EMPLOYEE WHERE name LIKE 'A%';
SELECT * FROM EMPLOYEE WHERE name ILIKE 'a%'; -- case insensitive
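The regular-expression functions mentioned above can be sketched like this:

```sql
-- regexp_matches returns true when the pattern is found anywhere
SELECT * FROM EMPLOYEE WHERE regexp_matches(name, '^[AB]');
-- regexp_replace rewrites the first match (add the 'g' flag for all matches)
SELECT regexp_replace('Hello World', 'World', 'DuckDB') AS replaced;
```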
DuckDB has extensive date and time support with functions for extraction, arithmetic, and formatting. Use CURRENT_DATE and CURRENT_TIMESTAMP for current values, DATE_PART or EXTRACT to get components, and date arithmetic with INTERVAL. DuckDB can also parse dates from strings and format dates for display.
SELECT
    CURRENT_DATE AS today,
    CURRENT_TIMESTAMP AS now,
    CURRENT_DATE + INTERVAL 7 DAY AS next_week,
    CURRENT_DATE - INTERVAL 1 MONTH AS last_month,
    DATE_PART('year', CURRENT_DATE) AS year,
    DATE_PART('month', CURRENT_DATE) AS month,
    DAYNAME(CURRENT_DATE) AS day_name;
-- Date difference
SELECT DATE_DIFF('day', DATE '2024-01-01', CURRENT_DATE) AS days_since;
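Parsing dates from strings and formatting them for display, mentioned above, use strptime and strftime with format specifiers:

```sql
-- strptime parses a string into a timestamp; strftime formats a date as text
SELECT strptime('15/01/2023', '%d/%m/%Y') AS parsed,
       strftime(DATE '2023-01-15', '%d %B %Y') AS formatted;
```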
CASE provides conditional logic in queries, similar to if-else statements. Use CASE to create computed columns, categorize data, or handle different conditions. DuckDB supports both simple CASE (comparing one value) and searched CASE (evaluating multiple conditions). CASE can be used in SELECT, WHERE, ORDER BY, and other clauses.
SELECT name, salary,
    CASE
        WHEN salary >= 60000 THEN 'Senior'
        WHEN salary >= 45000 THEN 'Mid-Level'
        ELSE 'Junior'
    END AS level
FROM EMPLOYEE;
-- Simple CASE
SELECT name,
    CASE dept
        WHEN 'Engineering' THEN 'Tech'
        WHEN 'Sales' THEN 'Revenue'
        ELSE 'Other'
    END AS category
FROM EMPLOYEE;
ALTER TABLE modifies existing table structure by adding, removing, or renaming columns. DROP TABLE removes a table and all its data permanently. DuckDB supports IF EXISTS to prevent errors when dropping non-existent tables. Schema changes in DuckDB are efficient and don't require rewriting the entire table for most operations.
-- Add column
ALTER TABLE EMPLOYEE ADD COLUMN email VARCHAR;
-- Rename column
ALTER TABLE EMPLOYEE RENAME COLUMN email TO work_email;
-- Drop column
ALTER TABLE EMPLOYEE DROP COLUMN work_email;
-- Drop table
DROP TABLE IF EXISTS EMPLOYEE;