From: John Hsu <chenhao(dot)john(dot)hsu(at)gmail(dot)com>
To: pgsql-hackers(at)lists(dot)postgresql(dot)org
Cc: John Hsu <chenhao(dot)john(dot)hsu(at)gmail(dot)com>
Subject: Re: Improve pg_dump dumping publication tables
Date: 2020-10-26 21:43:59
Message-ID: 160374863905.1204.1019739784563924392.pgcf@coridan.postgresql.org
Lists: pgsql-hackers
Hi Cary,
Thanks for taking a look. I agree there's a risk, since this keeps more data in memory on the client side, and if you're dumping, say, millions of publication tables, that could become problematic.
That said, I'd argue this isn't any riskier than existing pg_dump code such as getTables(...), where we'd presumably run into the same kind of problem.
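To make the trade-off concrete, here's a rough sketch (not the submitted patch) of the single-query approach being discussed: pull every publication/table mapping in one pass and keep the rows client-side, in the same spirit as getTables(). It uses pg_dump's internal helpers (ExecuteSqlQuery, atooid, PQExpBuffer), and the function name and the bookkeeping around DumpableObjects are simplified for illustration:

static void
getAllPublicationTables(Archive *fout)
{
    PQExpBuffer query = createPQExpBuffer();
    PGresult   *res;
    int         ntups;
    int         i_prpubid;
    int         i_prrelid;

    /* One query for every publication/table pair, instead of one per table */
    appendPQExpBufferStr(query,
                         "SELECT pr.prpubid, pr.prrelid "
                         "FROM pg_catalog.pg_publication_rel pr");

    res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);

    ntups = PQntuples(res);
    i_prpubid = PQfnumber(res, "prpubid");
    i_prrelid = PQfnumber(res, "prrelid");

    /*
     * All ntups rows are now held in client memory at once; with millions
     * of publication/table pairs this is the cost in question, but it
     * mirrors what getTables() already does with pg_class.
     */
    for (int i = 0; i < ntups; i++)
    {
        Oid         prpubid = atooid(PQgetvalue(res, i, i_prpubid));
        Oid         prrelid = atooid(PQgetvalue(res, i, i_prrelid));

        /* ... match prpubid/prrelid against the in-memory publication
         * and table arrays (elided here) ... */
        (void) prpubid;
        (void) prrelid;
    }

    PQclear(res);
    destroyPQExpBuffer(query);
}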
Cheers,
John H