Query completes on Azure PostgreSQL but client keeps running (state = idle in transaction / ClientRead)

Anna 0 Reputation points
2025-06-11T09:34:33.4833333+00:00

Environment

  • Service: Azure Database for PostgreSQL – Flexible Server (General Purpose, D2ds_v5, 2 vCores, 8 GiB RAM, 1024 GiB storage)
  • Client OS: Windows 11 / WSL 2
  • PostgreSQL version: 15.12

Problem

I connect from PyCharm and run fairly simple DDL / DML statements against a large table (~28 GB). While the query is active:

  1. Portal metrics show CPU at 70–80 %.
  2. After less than 10 minutes the CPU drops to ~2 %, suggesting the server has finished its part.
  3. In pg_stat_activity the backend transitions from active to idle in transaction with wait_event = ClientRead, then either sits there for a long time or disappears altogether (see the monitoring sketch after this list).
  4. Meanwhile the Python process never returns (no error, just hangs); I eventually have to stop it manually.
  5. I reproduced the exact same pattern in DataGrip: the query hangs indefinitely, yet once CPU falls to ~2 % the new table already shows up in the schema browser, confirming the server work is done.
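
For reference, here is roughly how I watch those state transitions from a second session, as a minimal psycopg2 sketch (the DSN is a placeholder; substitute your own host and credentials):

    import time
    import psycopg2

    # Placeholder DSN -- replace host/user/password with your own.
    DSN = ("host=myserver.postgres.database.azure.com dbname=mydb "
           "user=myuser password=secret sslmode=require")

    with psycopg2.connect(DSN) as conn:
        conn.autocommit = True
        with conn.cursor() as cur:
            while True:
                # State and wait event of every other backend.
                cur.execute("""
                    SELECT pid, state, wait_event_type, wait_event, query_start
                    FROM pg_stat_activity
                    WHERE pid <> pg_backend_pid() AND state IS NOT NULL
                    ORDER BY query_start
                """)
                for row in cur.fetchall():
                    print(row)
                time.sleep(5)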

What I’ve ruled out

  • No blocking locks: pg_locks shows no waiters and pg_blocking_pids() comes back empty (a sketch of these checks follows this list).
  • Nothing new appears in pg_stat_statements or any pg_stat_progress_* view.
  • The same query completes instantly on a smaller subset of the table (~17 GB).
  • The table has been vacuumed and analyzed (VACUUM ANALYZE).
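
The lock checks themselves, as a minimal psycopg2 sketch (placeholder DSN again):

    import psycopg2

    # Placeholder DSN -- replace with your own connection details.
    DSN = ("host=myserver.postgres.database.azure.com dbname=mydb "
           "user=myuser password=secret sslmode=require")

    with psycopg2.connect(DSN) as conn:
        with conn.cursor() as cur:
            # Ungranted locks mean a backend is waiting; empty output rules out blocking.
            cur.execute("SELECT pid, locktype, mode FROM pg_locks WHERE NOT granted")
            print("waiting locks:", cur.fetchall())
            # pg_blocking_pids(pid) lists the PIDs blocking that backend.
            cur.execute("""
                SELECT pid, pg_blocking_pids(pid) AS blockers
                FROM pg_stat_activity
                WHERE cardinality(pg_blocking_pids(pid)) > 0
            """)
            print("blocked backends:", cur.fetchall())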

Questions

  1. Why does the server report the session as idle / finished while the client still thinks it is fetching data?
  2. Could result-set size, TCP keep-alive settings, or Azure gateway time-outs be responsible?
  3. Which PostgreSQL or Flexible Server parameters might explain this behaviour?

Any insights or similar experiences would be greatly appreciated—especially ways to detect sooner that the backend has already sent all rows.

Azure Database for PostgreSQL

Accepted answer
  Narendra Pakkirigari, 475 Reputation points, Microsoft External Staff Moderator
    2025-06-13T12:35:11.4+00:00

    Hi Anna,

    The issue is that the server has finished its work, but the client (Python under PyCharm, or DataGrip) hangs while waiting to receive the results. On the server side, PostgreSQL switches to the “idle in transaction” state with wait_event = ClientRead, which means PostgreSQL is done processing the query and is now waiting for the client to read the data or send the next command.

    The problem only shows up with the large table (about 28 GB) and not with smaller subsets (about 17 GB). That points to how large result sets are handled, especially when sent over the network: if the client can’t keep up, or doesn’t start reading the data quickly enough, the connection looks idle while the server simply waits.

    You’ve already ruled out the common causes, locking and slow queries, using pg_locks, pg_blocking_pids(), and pg_stat_statements, which is good. It confirms that the problem isn’t how the query runs on the server, but how the client handles the data after the query finishes.

    Try PostgreSQL parameters such as idle_in_transaction_session_timeout to automatically close sessions that sit idle in a transaction for too long. It also helps to adjust the TCP keep-alive settings (tcp_keepalives_idle, tcp_keepalives_interval, tcp_keepalives_count) so that connections don’t silently hang when there’s no activity; a sketch of both follows.
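
    As a minimal sketch, assuming psycopg2 (host, credentials, and the timeout values below are placeholders, not tuned recommendations): those keep-alive settings have client-side libpq equivalents that can be passed at connect time, and the idle timeout can be set per session.

        import psycopg2

        conn = psycopg2.connect(
            host="myserver.postgres.database.azure.com",  # placeholder
            dbname="mydb", user="myuser", password="secret",
            sslmode="require",
            # libpq client-side TCP keep-alive settings:
            keepalives=1,            # enable keep-alive probes
            keepalives_idle=30,      # seconds idle before the first probe
            keepalives_interval=10,  # seconds between probes
            keepalives_count=3,      # failed probes before dropping the connection
        )
        with conn.cursor() as cur:
            # Abort any transaction in this session that stays idle > 5 minutes.
            cur.execute("SET idle_in_transaction_session_timeout = '5min'")
        conn.commit()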

    On the client side, avoid queries like INSERT ... RETURNING * on huge tables, reduce the fetch size (especially in tools like DataGrip), and always commit or close your transactions explicitly; see the streaming sketch below. If possible, run the same query outside WSL 2, on native Windows or in Azure Cloud Shell, to see whether the issue is tied to the WSL 2 environment.
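
    For large SELECTs from Python, a named (server-side) cursor streams rows in batches instead of buffering the whole result set in the client; a minimal sketch, again with a placeholder DSN and table name:

        import psycopg2

        # Placeholder DSN -- replace with your own connection details.
        DSN = ("host=myserver.postgres.database.azure.com dbname=mydb "
               "user=myuser password=secret sslmode=require")

        conn = psycopg2.connect(DSN)
        # A named cursor is held on the server; rows arrive itersize at a time.
        with conn.cursor(name="big_fetch") as cur:
            cur.itersize = 10_000
            cur.execute("SELECT * FROM my_big_table")  # placeholder table
            rows = 0
            for _ in cur:
                rows += 1  # replace with real row handling
        conn.commit()   # end the transaction so the backend doesn't sit idle
        conn.close()
        print(rows, "rows streamed")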
