Hi Anna,
The issue is that the server has finished its job, but the client (Python in PyCharm, or DataGrip) hangs while waiting to receive the results. On the server side, PostgreSQL switches to an "idle in transaction" state with wait_event = ClientRead, which means PostgreSQL is done processing the query and is now waiting for the client to read the data or send the next command.
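You can confirm this from another session by checking pg_stat_activity for backends stuck waiting on the client:

```sql
-- Sessions where the server is finished and waiting on the client to read
SELECT pid, state, wait_event_type, wait_event, query_start, left(query, 80) AS query
FROM pg_stat_activity
WHERE wait_event = 'ClientRead'
  AND state = 'idle in transaction';
```

If your hung session shows up here, the server side is healthy and the stall is entirely on the client/network path.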
The problem only shows up with the large table (about 28 GB), not with smaller subsets (around 17 GB). That points to how large result sets are handled, especially when sent over the network. If the client can't keep up or doesn't start reading the data quickly enough, the connection appears idle while the server simply waits.
You've already checked for common issues like locking or slow queries using pg_locks, pg_blocking_pids(), and pg_stat_statements, which is good. This confirms that the problem isn't with how the query runs on the server, but with how the client handles the data after the query finishes.
On the server side, try PostgreSQL parameters such as idle_in_transaction_session_timeout to automatically close sessions that sit idle in a transaction for too long. It also helps to adjust the TCP keep-alive settings (tcp_keepalives_idle, tcp_keepalives_interval, tcp_keepalives_count) so that connections don't silently hang when there's no activity.
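As a sketch, those settings could be applied like this (the timeout and keep-alive values are illustrative, not recommendations, so tune them for your environment):

```sql
-- Kill sessions that sit "idle in transaction" longer than 5 minutes
ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min';

-- Probe quiet connections sooner so dead clients are detected (seconds)
ALTER SYSTEM SET tcp_keepalives_idle = 60;
ALTER SYSTEM SET tcp_keepalives_interval = 10;
ALTER SYSTEM SET tcp_keepalives_count = 5;

SELECT pg_reload_conf();  -- apply without a server restart
```

Note that idle_in_transaction_session_timeout terminates the session, so treat it as a safety net rather than a fix for the underlying client stall.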
On the client side, avoid queries like INSERT ... RETURNING * on huge tables, reduce the fetch size (especially in tools like DataGrip), and always make sure you explicitly commit or close your transactions. If possible, run the same query outside of WSL2, on native Windows or in Azure Cloud Shell, to see if the issue is tied to the WSL2 environment.
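For the Python client, the reduced-fetch-size idea can be sketched as a small helper that streams rows in batches instead of materializing the whole result set. The helper name and batch size below are my own choices; with psycopg2 you would pair it with a named (server-side) cursor so rows aren't buffered on the client:

```python
def fetch_in_batches(cursor, batch_size=1000):
    """Yield rows from a DB-API cursor in chunks of batch_size.

    The client starts consuming rows immediately instead of waiting
    for the entire result set, so the server does not sit in
    ClientRead waiting for a stalled reader.
    """
    while True:
        rows = cursor.fetchmany(batch_size)
        if not rows:
            break
        yield from rows
```

With psycopg2 this would look roughly like `cur = conn.cursor(name="big_read")` followed by `cur.execute(...)` and `for row in fetch_in_batches(cur): ...`, keeping the connection actively reading throughout.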