I've been trying to copy our prod Supabase database down to our preview Supabase instance and I'm having a rough time with it. The backup works fine, but the restore fails more than 75% of the time, either with a timeout or with what looks like the Postgres connection process failing/exiting and a disconnect. We upgraded from a medium to a large instance today and are still seeing it.
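Commands used (roughly this shape; connection strings are placeholders):

```bash
# Dump prod in custom format so pg_restore can skip/reorder objects:
pg_dump "$PROD_DB_URL" -Fc --no-owner --no-privileges -f prod.dump

# Restore into the preview instance:
pg_restore -d "$PREVIEW_DB_URL" --clean --if-exists \
  --no-owner --no-privileges prod.dump
```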
We have a handful of tables that are large-ish (200 MB-600 MB), mostly from large jsonb columns. In the grand scheme of things a 200 MB table feels small; I'm just saying these are the largest tables we have.
When restoring, I either get a timeout on one of these large tables or what looks like the server going away.
Example of the server crashing or not responding after a long wait:

```
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 4522; 0 6089072 TABLE DATA SomeTable postgres
pg_restore: error: COPY failed for table "SomeTable": server closed the connection unexpectedly
	This probably means the server terminated abnormally
	before or while processing the request.
server closed the connection unexpectedly
	This probably means the server terminated abnormally
	before or while processing the request."
    at throwUnhandledRejectionsMode (node:internal/process/promises:389:7)
    at processPromiseRejections (node:internal/process/promises:470:17)
    at process.processTicksAndRejections (node:internal/process/task_queues:96:32) {
  code: 'ERR_UNHANDLED_REJECTION'
}
```
The interesting thing to me is that I can limit my local Postgres server in Docker to 1 GB of RAM and the restores work 100% of the time (I can even turn on concurrency with -j 3 and it still works), but restoring to Supabase on an instance with 8 GB of RAM, it dies. I do see a CPU spike when restoring, but it spikes for about a minute, then CPU drops to 0 while I wait 15+ minutes for my pg_restore command to report a timeout/failure.
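For reference, the local test setup is something like this (image tag and password are placeholders):

```bash
# Vanilla Postgres in Docker with a hard 1 GB memory cap:
docker run --name pg-restore-test --memory=1g \
  -e POSTGRES_PASSWORD=postgres -p 5432:5432 -d postgres:15

# Even a parallel restore against it succeeds every time:
pg_restore -d "postgresql://postgres:postgres@localhost:5432/postgres" \
  -j 3 --no-owner prod.dump
```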
My internet connection is gigabit and super reliable; I don't think it's a local problem.
I also had a rough time trying to restore from the Supabase-generated backups. Our dev instances don't run Postgres via the Supabase CLI, just vanilla Postgres in Docker, and the Supabase backups don't seem to restore to vanilla Postgres very well at all.
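My working theory is that those dumps reference Supabase-specific roles and event triggers that vanilla Postgres doesn't have. A rough, untested sketch of filtering those out of a plain-format backup before applying it (file name is a placeholder):

```bash
# Drop role/event-trigger DDL that vanilla Postgres chokes on,
# then apply the rest, continuing past any remaining errors:
sed -E '/^(CREATE|ALTER) (ROLE|EVENT TRIGGER)/d' supabase-backup.sql \
  | psql "postgresql://postgres:postgres@localhost:5432/postgres" \
      -v ON_ERROR_STOP=0
```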
I've opened a support ticket; this discussion is more to ask what may have worked well for other people. So, to that end...
Does anyone have better experiences with DB-copy-style processes for a database around 3.5 GB or larger? Are you using something other than pg_dump and pg_restore?
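For what it's worth, one fallback I've been wondering about (untested; table name and connection strings are placeholders) is streaming each table individually with psql's \copy instead of one big pg_restore:

```bash
# Stream one table at a time from prod to preview, no
# intermediate dump file (repeat per table):
psql "$PROD_DB_URL" \
  -c "\copy (SELECT * FROM public.\"SomeTable\") TO STDOUT" \
| psql "$PREVIEW_DB_URL" \
  -c "\copy public.\"SomeTable\" FROM STDIN"
```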