Starting with DuckDB 1.5, you can create Iceberg tables on regular S3 buckets by specifying a location, not just in S3 Tables.
INSTALL aws;
INSTALL httpfs;
INSTALL iceberg;
LOAD aws;
LOAD httpfs;
LOAD iceberg;
CREATE SECRET s3_secret (
    TYPE s3,
    PROVIDER credential_chain,
    CHAIN 'config',
    PROFILE 'your-profile'
);
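If you are not using a shared credentials file, a secret with static keys also works. A minimal sketch; the key ID, secret, and region values are placeholders you would substitute:

```sql
-- Alternative: static credentials instead of the credential chain.
-- KEY_ID, SECRET, and REGION below are placeholder values.
CREATE SECRET s3_static (
    TYPE s3,
    KEY_ID 'AKIA...',
    SECRET 'your-secret-access-key',
    REGION 'us-east-1'
);
```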
ATTACH '123456789012' AS glue_catalog (
    TYPE iceberg,
    ENDPOINT_TYPE glue,
    SECRET s3_secret
);
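Once attached, it is worth sanity-checking the catalog before creating anything. A quick sketch; which databases and tables appear depends on your Glue setup:

```sql
-- glue_catalog should appear among the attached databases.
SHOW DATABASES;

-- List every table visible through the attached catalogs.
SHOW ALL TABLES;
```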
CREATE TABLE glue_catalog.default.iceberg_test (
    id INTEGER,
    name VARCHAR,
    created_at TIMESTAMP
) WITH (
    'format-version' = '2',
    'location' = 's3://your-bucket/iceberg-test/'
);
SELECT, INSERT, UPDATE, and DELETE work as usual.
INSERT INTO glue_catalog.default.iceberg_test VALUES
    (1, 'Alice', '2026-03-17 22:00:00'),
    (2, 'Bob', '2026-03-17 22:01:00'),
    (3, 'Charlie', '2026-03-17 22:02:00');
UPDATE glue_catalog.default.iceberg_test SET name = 'Alice Updated' WHERE id = 1;
DELETE FROM glue_catalog.default.iceberg_test WHERE id = 3;
SELECT * FROM glue_catalog.default.iceberg_test;
Running EXPLAIN ANALYZE on a SELECT SUM(value) against a 100,000-row table showed a Total Time of 0.576s, with the ICEBERG_SCAN reading a single Parquet file in about 0.18s. A subsequent SELECT AVG(value) dropped to #GET: 1 and a Total Time of 0.128s thanks to the External File Cache, which indicates that most of the time goes to network round-trips rather than scanning.
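The two measurements described here were produced with queries along these lines; the table name and the value column are assumptions about the test setup:

```sql
-- First run: cold cache, the Parquet file is fetched over the network.
EXPLAIN ANALYZE SELECT SUM(value) FROM glue_catalog.default.bench_table;

-- Second run: the External File Cache serves the Parquet data locally.
EXPLAIN ANALYZE SELECT AVG(value) FROM glue_catalog.default.bench_table;
```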
┌─────────────────────────────────────┐
│┌───────────────────────────────────┐│
││ HTTPFS HTTP Stats ││
││ in: 1.3 MiB ││
││ #GET: 5 ││
│└───────────────────────────────────┘│
└─────────────────────────────────────┘
┌────────────────────────────────────────────────┐
│┌──────────────────────────────────────────────┐│
││ Total Time: 0.576s ││
│└──────────────────────────────────────────────┘│
└────────────────────────────────────────────────┘
┌───────────────────────────┐
│ UNGROUPED_AGGREGATE │
│ 1 row │
│ 0.00s │
└─────────────┬─────────────┘
┌─────────────┴─────────────┐
│ PROJECTION │
│ 100,000 rows │
│ 0.00s │
└─────────────┬─────────────┘
┌─────────────┴─────────────┐
│ TABLE_SCAN │
│ Function: │
│ ICEBERG_SCAN │
│ Projections: value │
│ Total Files Read: 1 │
│ 100,000 rows │
│ 0.18s │
└───────────────────────────┘