UPDATE: Thanks all for the suggestions. For now we ended up increasing seq_page_cost from 1 to 2 in the database. That has gotten us back to our old, much faster speeds without any change in code, and it seems to have improved the speed of other queries as well without slowing any down.
ALTER DATABASE mydb SET seq_page_cost=2;
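Note that an ALTER DATABASE setting only takes effect for new connections. A quick way to confirm what the planner actually sees (the database name mydb is just from the example above):

```sql
-- Reconnect to mydb first: ALTER DATABASE settings apply to new sessions only.
SHOW seq_page_cost;   -- should report 2

-- To experiment without changing the database-wide default,
-- override it for the current session only:
SET seq_page_cost = 2;
```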
As Jeff suggested, we'll try to come up with a standalone example that exhibits the behavior. The example below was more to demonstrate the construct. Table names and fields were changed to protect the innocent, which is why we didn't bother showing explain plans. The behavior also seems to depend on the distribution of the data and gets worse when stats are updated (via VACUUM ANALYZE). We didn't see this in PostgreSQL 8.3, and this system was recently upgraded from 8.3 to 8.4.
--- ORIGINAL ARTICLE HERE ---
This is a very odd thing, and I think it has happened to us perhaps once before.
It's a bit puzzling, and we aren't particularly happy with our workaround, because it's
something that looks a bit bizarre to a casual observer. The hack is turning the enable_seqscan setting
off for a particular query to force the planner to use the indexes available to it.
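A sketch of that hack, with a hypothetical query standing in for ours (SET LOCAL confines the change to a single transaction, so the rest of the application is unaffected):

```sql
BEGIN;
-- Force the planner away from sequential scans for this query only.
SET LOCAL enable_seqscan = off;
SELECT *
  FROM parent_table                      -- hypothetical table name
 WHERE some_indexed_field = 'some value';
COMMIT;  -- enable_seqscan reverts automatically at transaction end
```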
What is particularly troubling about this problem is that it wasn't always this way.
This is a piece of query code we've had in an application for a while, and it has worked, shall
we say, really fast: response times of 300 ms - 1 sec for what is not a trivial query against a not
so trivially sized hierarchy of tables.
At any rate, one day this query that we were very happy with suddenly started
hanging, taking 5 minutes to run. Sure, data had been added and so forth, but that didn't
completely explain this sudden change of behavior. The plan it took had changed drastically:
it just suddenly decided to stop using a critical index it had always used. Well, it was still using the index, but just on
the root table, not the children. Querying a child directly proved that it still refused to use it,
so the inheritance hierarchy didn't seem to be at fault here.
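When a query suddenly switches plans like this, the first diagnostic step is comparing the plan the optimizer picks with the plan it picks when sequential scans are discouraged. A sketch, using the same hypothetical names as above:

```sql
-- Plan the optimizer currently picks
-- (look for Seq Scan nodes on the child tables)
EXPLAIN ANALYZE
SELECT * FROM parent_table WHERE some_indexed_field = 'some value';

-- Plan when sequential scans are discouraged
-- (the index scans we expected to see)
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT * FROM parent_table WHERE some_indexed_field = 'some value';
SET enable_seqscan = on;

-- Comparing estimated vs. actual row counts between the two outputs
-- usually shows where the cost model is going wrong.
```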
UPDATE: Since this article was written, Yum now comes packaged with PostGIS 1.5. Please refer to PostGIS Yum Install for PostgreSQL 9.0 if you want to go with a faster, more idiot-proof install process.
A couple of people have asked about this, so we thought we would address the common issues people run into
when compiling PostGIS.
The first question some will ask is: why don't you just take the packaged PostGIS that comes with PostgreSQL via Yum?
There are two general reasons we don't install PostGIS using Yum:
We like running a newer GEOS than what is generally packaged with Yum, because certain PostGIS functionality
is not enabled with older versions of GEOS. For example, GEOS 3.1 introduced a couple of new functions like ST_Covers, and
the cascaded union and prepared geometry speed enhancements require GEOS 3.1+. GEOS 3.2 brought more robust handling of topological
exceptions, new buffering features like one-sided buffer (which the upcoming PostGIS 1.5 exposes if and only if you are running
GEOS 3.2+), faster buffering, and several memory leak cleanups. We highly recommend using GEOS 3.2.0; it is just a more robust product than prior GEOS versions.
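To check which GEOS library your PostGIS install is actually linked against, PostGIS exposes version-reporting functions (the exact output depends on your install):

```sql
-- Reports the GEOS library PostGIS was compiled/linked against
SELECT postgis_geos_version();

-- Full summary: PostGIS, GEOS, Proj, and libxml versions in one string
SELECT postgis_full_version();
```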
We generally like to run newer versions of PostGIS than what Yum provides, and we also run multiple versions of PostGIS
on the same server. This is a bit tricky to do with Yum, since it exposes only one version, and generally an older one than we
care for. PostGIS 1.5 will be especially great, since it will have geodetic support and cool functions like ST_ClosestPoint,
among other major goodies.
NOTE: PostGIS 1.3, 1.4, and 1.5 can coexist on the same PostgreSQL daemon service as long as they are installed in different databases. They will all share the same Proj and GEOS, so installing PostGIS 1.5 will not break your
PostGIS 1.4 or 1.3 installs. The newer GEOS 3.2.0 C-API is backward compatible with the older GEOS C-API, and the C-API is what PostGIS uses.
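Since each database carries its own PostGIS version in this kind of setup, you can check what a given database is running by connecting to it and asking (the database names here are hypothetical):

```sql
-- Run inside each database (e.g. \c mygis_15 in psql) to see its PostGIS version
SELECT postgis_lib_version();
-- e.g. returns '1.5.0' in one database and '1.4.1' in another,
-- depending on which PostGIS was installed there
```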