From: | "Andrey V(dot) Lepikhov" <a(dot)lepikhov(at)postgrespro(dot)ru> |
---|---|
To: | Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, pgsql-hackers(at)postgresql(dot)org |
Subject: | Re: POC: postgres_fdw insert batching |
Date: | 2020-07-10 04:28:44 |
Message-ID: | 0300d7df-854b-6f72-1e93-dfbf98f3fc8a@postgrespro.ru |
Lists: pgsql-hackers
On 6/28/20 8:10 PM, Tomas Vondra wrote:
> Now, the primary reason why the performance degrades like this is that
> while FDW has batching for SELECT queries (i.e. we read larger chunks of
> data from the cursors), we don't have that for INSERTs (or other DML).
> Every time you insert a row, it has to go all the way down into the
> partition synchronously.
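
For illustration, here is a minimal sketch (not taken from the patch; the
function name and parameters are hypothetical) of how a batched remote INSERT
could be deparsed as a single multi-row VALUES statement instead of one
statement per row:

#include "postgres.h"
#include "lib/stringinfo.h"

/*
 * Build "INSERT INTO <relname> VALUES ($1, $2), ($3, $4), ..." so that
 * nrows tuples travel to the remote server in a single round trip.  The
 * real deparsing for postgres_fdw lives in contrib/postgres_fdw/deparse.c.
 */
static void
deparse_batched_insert(StringInfo buf, const char *relname,
                       int ncols, int nrows)
{
    int         pindex = 1;

    appendStringInfo(buf, "INSERT INTO %s VALUES ", relname);
    for (int row = 0; row < nrows; row++)
    {
        appendStringInfoString(buf, row == 0 ? "(" : ", (");
        for (int col = 0; col < ncols; col++)
            appendStringInfo(buf, "%s$%d", col == 0 ? "" : ", ", pindex++);
        appendStringInfoChar(buf, ')');
    }
}

With nrows = 3 and ncols = 2 this produces
INSERT INTO t VALUES ($1, $2), ($3, $4), ($5, $6),
so the per-row synchronous round trips collapse into one.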
You added new fields to the PgFdwModifyState struct. Why didn't you reuse
the ResultRelInfo::ri_CopyMultiInsertBuffer field and the
CopyMultiInsertBuffer machinery as storage for incoming tuples?
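
For reference, the machinery I mean looks roughly like this in
src/backend/commands/copy.c (simplified excerpt as of PostgreSQL 13; see the
source for the authoritative definition):

#include "postgres.h"
#include "access/heapam.h"      /* BulkInsertState */
#include "executor/tuptable.h"  /* TupleTableSlot */
#include "nodes/execnodes.h"    /* ResultRelInfo */

#define MAX_BUFFERED_TUPLES 1000    /* flush threshold used by COPY */

/*
 * COPY accumulates up to MAX_BUFFERED_TUPLES slots per target relation in
 * one of these buffers (reachable via ResultRelInfo::ri_CopyMultiInsertBuffer)
 * and flushes them with a single table_multi_insert() call.
 */
typedef struct CopyMultiInsertBuffer
{
    TupleTableSlot *slots[MAX_BUFFERED_TUPLES]; /* buffered tuples */
    ResultRelInfo *resultRelInfo;   /* target relation/partition */
    BulkInsertState bistate;        /* bulk-insert state for this rel */
    int         nused;              /* number of 'slots' currently in use */
    uint64      linenos[MAX_BUFFERED_TUPLES];   /* input line numbers, for
                                                 * error reporting */
} CopyMultiInsertBuffer;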
--
regards,
Andrey Lepikhov
Postgres Professional