Learn how to migrate data from Coder's built-in database.
By default, Coder deploys a built-in database in the installation's Kubernetes namespace. We recommend using this database only for evaluation purposes.
At the end of your evaluation period, you may need to migrate the data from the built-in database to an out-of-cluster PostgreSQL database for production use. This article will walk you through the process of doing so.
You must be a cluster admin for your Kubernetes cluster.
Azure database users: if you're using Azure Database for PostgreSQL, note that Coder works only with the Single Server option; the Flexible Server (Preview) and Hyperscale (Citus) options do not support the required TimescaleDB extension.
Access the database pod and dump the database into a file:
kubectl exec -it statefulset/timescale -n coder -- pg_dump -U coder -d coder > backup.sql
Optional: If your database is large, you can reduce the size of the dump file by truncating Coder's telemetry, metrics, and audit log data and then re-running the dump:
TRUNCATE metric_events;
TRUNCATE environment_stats;
TRUNCATE audit_logs;
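These statements are run inside the built-in database. A minimal sketch of opening a psql session in the database pod, assuming the same pod and credentials as the dump command above and that psql is available in the image:
kubectl exec -it statefulset/timescale -n coder -- psql -U coder -d coder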
Access your PostgreSQL instance and create the coder user and the coder database.
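For example, from a superuser session on the external instance (a sketch only; the postgres superuser, the password placeholder, and the ownership choice are assumptions to adapt to your environment):
psql -U postgres -c "CREATE USER coder WITH PASSWORD '<PASSWORD>';"
psql -U postgres -c "CREATE DATABASE coder OWNER coder;"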
Import the data you exported in the first step into your external database:
psql -U coder < backup.sql
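If you run the import from outside the database host, you will typically also need to pass the host, port, and database name explicitly; a sketch using the same placeholders as the Helm command below:
psql -h <HOST_ADDRESS> -p <PORT_NUMBER> -U coder -d coder < backup.sql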
Connect your Coder instance to the database:
helm upgrade --reuse-values -n coder coder coder/coder \
--set postgres.default.enable=false \
--set postgres.host=<HOST_ADDRESS> \
--set postgres.port=<PORT_NUMBER> \
--set postgres.user=<DATABASE_USER> \
--set postgres.database=<DATABASE_NAME> \
--set postgres.passwordSecret=<secret-name> \
--set postgres.sslMode=require
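The postgres.passwordSecret value refers to a Kubernetes secret in the coder namespace that must exist before the upgrade. A minimal sketch of creating it, assuming the chart reads the password from a key named password (check your chart's values documentation for the exact key):
kubectl create secret generic <secret-name> -n coder --from-literal=password=<DATABASE_PASSWORD>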
Optional: If you'd like to delete the Timescale persistent volume, run:
kubectl delete pvc timescale-data-timescale-0 -n coder
At this point, you should be able to log in to your Coder deployment successfully.
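To confirm that the new configuration has rolled out, you can watch the Coder pods restart; a sketch, assuming the chart's main deployment is named coderd:
kubectl rollout status deployment/coderd -n coder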