Tuesday, May 10. 2011
Question: What is the difference between CURRENT_TIMESTAMP and clock_timestamp()?
Answer: CURRENT_TIMESTAMP is an ANSI SQL standard variable you will find in many relational databases (PostgreSQL, SQL Server, Firebird, IBM DB2, and MySQL, to name a few)
that records the start of the transaction. The important thing to keep in mind is that there is only one value per transaction, so if you have a long-running transaction,
you won't see it change as you go along.
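Here is a quick way to see this for yourself in psql; the pg_sleep is just there to make the gap obvious:
BEGIN;
SELECT CURRENT_TIMESTAMP As txn_start, clock_timestamp() As right_now;
SELECT pg_sleep(2);
-- txn_start comes back unchanged; right_now is about 2 seconds later
SELECT CURRENT_TIMESTAMP As txn_start, clock_timestamp() As right_now;
COMMIT;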
clock_timestamp() is a PostgreSQL function that always returns the current clock time. I don't think I'm alone in using it for simple benchmarking and other things,
for example when I need to record the timing of each part of a function from within the function using pedestrian RAISE NOTICE debug print statements.
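Something along these lines is what I mean (a minimal sketch with a made-up function name and pg_sleep standing in for the real work):
CREATE OR REPLACE FUNCTION do_some_work() RETURNS void AS
$$
DECLARE
    t_start timestamptz := clock_timestamp();
BEGIN
    PERFORM pg_sleep(0.5);  -- stand-in for the first expensive step
    RAISE NOTICE 'step 1 took %', clock_timestamp() - t_start;
    t_start := clock_timestamp();
    PERFORM pg_sleep(0.25); -- stand-in for the second expensive step
    RAISE NOTICE 'step 2 took %', clock_timestamp() - t_start;
END;
$$ LANGUAGE plpgsql;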
There is another cool way I like to use it: for a batch of records that each involve an expensive function call, benchmarking how long each record takes to process.
One of the things I'm working on is improving the speed of the tiger_geocoder packaged in PostGIS 2.0. The first route of attack seemed to be the normalize_address function,
which I noticed was taking anywhere from 10% to 50% of my time in the geocode process. That's a ton of time if you are trying to batch geocode a ton of records. The thing is,
how long the function takes is very sensitive to how badly formed the address is, so a whole batch can be held up by one bad apple, and since the batch doesn't return until all records are processed, it makes
the whole thing seem to take a while.
So rather than looping through each record, I thought it would be cool if I could run the batch, but have it tell me how long each record took to process relative to the rest, so I could get
a sense of what a problem address looks like. So I wrote this query:
WITH ctbenchmark AS
(SELECT *,
        the_time - COALESCE(lag(the_time) OVER(ORDER BY the_time), CURRENT_TIMESTAMP) As process_time,
        the_time - CURRENT_TIMESTAMP As diff_from_start
   FROM (SELECT address_1, city, state, zip,
                pprint_addy(normalize_address(coalesce(address_1,'') || ', ' || coalesce(city || ' ','') || state || ' ' || zip)) As pp_addr,
                clock_timestamp() As the_time
           FROM testgeocode LIMIT 1000) As foo )
SELECT *
FROM ctbenchmark
WHERE process_time > '00:00:00.016'::interval;
Which returned an output something like this:
address_1 | city | state | zip | pp_addr | the_time | process_time | diff_from_start
------------------+------+-------+--------+-------------+-----------------------------+---------------+------------------
48 MAIN ST .. | S.. | MA | 021.. | 48 MAIN .. | 2011-05-10 03:24:43.078-04 | 00:00:00.032 | 00:00:00.032
15 ... | | MA | 018... | 15 GREN... | 2011-05-10 03:24:50.796-04 | 00:00:00.031 | 00:00:07.75
Friday, April 08. 2011
In a prior article Use of Out and InOut Parameters
we demonstrated how to use OUT parameters and INOUT parameters to return a set of records from a PostgreSQL function.
There is another approach to doing this, and that is to use the ANSI Standard RETURNS TABLE construct.
If you come from a SQL Server or IBM DB2 background, the RETURNS TABLE construct is probably the most familiar, but how
you use it and what is legal in it is a little different than it is in SQL Server or IBM DB2. We'll save the
compare and contrast as a topic for another article.
In terms of performance between using OUT parameters vs. RETURNS TABLE, we haven't noticed much of a difference. The main thing that is
nice about RETURNS TABLE is that it's syntactically more pleasing: it makes the structure of what you are returning clearer.
In these next examples, we'll demonstrate examples similar to the ones we showed in the aforementioned article, except using the
RETURNS TABLE construct.
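To give a quick taste before the full examples, here is a minimal sketch using a hypothetical tests(test_id, test_name) table; the two functions below return the same thing, one with OUT parameters and one with RETURNS TABLE:
-- OUT parameter style (works on PostgreSQL 8.1+)
CREATE OR REPLACE FUNCTION fn_sqltestout(param_subject text,
    OUT test_id integer, OUT test_name text)
RETURNS SETOF record AS
$$
    SELECT t.test_id, t.test_name
    FROM tests As t
    WHERE t.test_name ILIKE '%' || $1 || '%';
$$ LANGUAGE sql;

-- RETURNS TABLE style (PostgreSQL 8.4+): the structure of the output reads in one place
CREATE OR REPLACE FUNCTION fn_sqltesttable(param_subject text)
RETURNS TABLE(test_id integer, test_name text) AS
$$
    SELECT t.test_id, t.test_name
    FROM tests As t
    WHERE t.test_name ILIKE '%' || $1 || '%';
$$ LANGUAGE sql;

-- Both are called the same way:
-- SELECT * FROM fn_sqltesttable('point');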
Be warned that the RETURNS TABLE construct is only available for PostgreSQL 8.4+, while the OUT approach
has existed since PostgreSQL 8.1. With that said, if you need your code to work on 8.3 or lower, you can't use RETURNS TABLE.
When in doubt about a feature and you are creating code that needs to support earlier versions of PostgreSQL
(as we have to in the PostGIS development group),
or you want to get stubborn users off antiquated versions of PostgreSQL and need a bit of ammunition
(as we also have to on PostGIS development, including with our own developers - and you know who you are :) ),
check the
PostgreSQL feature matrix.
It will save you a lot of grief.
Continue reading "Using RETURNS TABLE vs. OUT parameters"
Friday, December 24. 2010
Question: You have a table of people and a table that specifies the activities each person is involved
in. You want to return a result that has one record per person and a column that has a listing of activities for each person
separated by semicolons and alphabetically sorted by activity. You also want the whole set alphabetically sorted by person's name.
This is a question we are always asked, and since we mentor on various flavors of databases,
we need to be able to switch gears and provide an answer that works on the client's database. Most
often the additional requirement is that you can't install new functions in the database. This means that
for PostgreSQL and SQL Server, which both support defining custom aggregates, that option is out.
Normally we try to come up with an answer that works in most databases, but sadly the only solution that works in
most is to push the problem off to the client front end, throw up your hands, and proclaim -- "This ain't something that should be
done in the database; it's a reporting problem." That is in fact what many database purists do, and all I can say to them is wake up and smell the coffee before you are out of a job.
We feel that data
transformation is an important function of a database, and if your database is incapable of massaging the data into a format
your various client apps can easily digest, WELL THAT'S A PROBLEM.
We shall now document this answer rather than trying to answer it for the umpteenth time. For starters,
PostgreSQL has a lot of answers to this question, probably more than any other database, though some are easier to execute than others
and many depend on the version of PostgreSQL you are using. SQL Server has two classes of answers, neither of which is terribly appealing,
but we'll go over the ones that don't require you to be able to install .NET stored functions in your database since, as we said, that is often a requirement.
MySQL has a fairly
simple, elegant and very portable way that it has had for a really long time.
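As a preview, here are minimal sketches of the three flavors, assuming hypothetical tables people(person_id, person_name) and activities(person_id, activity):
-- PostgreSQL 9.0+ (8.4 can do the same with array_agg/array_to_string)
SELECT p.person_name,
       string_agg(a.activity, ';' ORDER BY a.activity) As activities
FROM people As p LEFT JOIN activities As a ON a.person_id = p.person_id
GROUP BY p.person_name
ORDER BY p.person_name;

-- MySQL
SELECT p.person_name,
       GROUP_CONCAT(a.activity ORDER BY a.activity SEPARATOR ';') As activities
FROM people As p LEFT JOIN activities As a ON a.person_id = p.person_id
GROUP BY p.person_name
ORDER BY p.person_name;

-- SQL Server 2005+ (the FOR XML PATH trick, no .NET required)
SELECT p.person_name,
       STUFF((SELECT ';' + a.activity
              FROM activities As a
              WHERE a.person_id = p.person_id
              ORDER BY a.activity
              FOR XML PATH('')), 1, 1, '') As activities
FROM people As p
ORDER BY p.person_name;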
Continue reading "String Aggregation in PostgreSQL, SQL Server, and MySQL"
Monday, December 20. 2010
In Part 2 of PL/R we covered how to build PL/R functions that take arrays and produce textual output from generated R objects. We then used this in an aggregate SQL query using array_agg. Often when you are building PL/R functions
you'll have R functions that you want to reuse many times, either inside a single PL/R function or across various PL/R functions.
Unfortunately, calling a PL/R function from another PL/R function is not possible unless you are doing it via an spi.execute call.
There is another way to embed reusable R code in a PostgreSQL database.
In order to share database-stored R code across various PL/R functions, PL/R has a feature called a plr_module. In this tutorial
we'll learn how to create and register shareable R functions with plr_module. In the next part of this series we'll start to explore generating graphs with PL/R.
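As a rough preview (hedging a bit on details we'll cover in the article), the plr_modules support table described in the PL/R docs looks something like this; R code stored in it gets loaded into the R interpreter for every PL/R function:
-- the support table PL/R looks for (modseq controls load order)
CREATE TABLE plr_modules (modseq int4, modsrc text);
INSERT INTO plr_modules(modseq, modsrc)
VALUES (1, $$ r_quartiles <- function(x) quantile(x, probs = c(0.25, 0.5, 0.75)) $$);
-- reload modules in the current session (or just reconnect)
SELECT reload_plr_modules();

-- any PL/R function can now call the shared R function
CREATE OR REPLACE FUNCTION vals_quartiles(float8[]) RETURNS float8[] AS
$$
    as.numeric(r_quartiles(arg1))
$$ LANGUAGE plr;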
Continue reading "PL/R Part 3: Sharing Functions across PL/R functions with plr_module"
Friday, December 10. 2010
In Intro to PL/R and R, we covered how to enable PL/R language in the database and wrote some PL/R functions
that rendered plain text reports using the R environment. What makes combining R and PostgreSQL in
PL/R most powerful is when you can start writing SQL summary queries that use R functions like any other SQL function.
In this next example, we'll be using PostGIS test runs from tests we autogenerated from the Official PostGIS documentation (Documentation Driven Testing (DDT))
as described in the Garden Test section of the PostGIS Developer wiki.
We've also updated some of our logging generator and test patterns so future results may not represent what we demonstrated in the last article.
On a side note: among the changes in the tests was the introduction of more variants of the Empty Geometry now supported by PostGIS 2.0.
Our beloved PostGIS 2.0 trunk is at the moment somewhat unstable when working with these new forms of emptiness and with stuffing geometries in inappropriate places. It doesn't yet survive the mindless machine-gun battery of tests we have mercilessly inflicted on it.
It's been great fun trying to build a better dummy while watching Paul run around patching holes to make the software more dummy proof as the dummy stumbles across questionable but amusing PostGIS use cases not gracefully handled by his new serialization and empty logic.
On yet another side note, it's nice to
see that others are doing similar wonderful things with documentation. Check out Euler's comment on catalog tables,
where he uses the PostgreSQL SGML documentation to autogenerate PostgreSQL catalog table comments, using OpenJade's OSX to convert the SGML to XML and then XSL, similar to what we did with the PostGIS documentation to autogenerate PostGIS function/type comments and as a platform
for our test generator.
For our next exercises we'll be using the power of aggregation to push data into R instead of pg.spi.execute. This will make our functions far more reusable and versatile.
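To give a flavor of where we're going, here's a minimal sketch of a PL/R function that takes a whole array, fed by array_agg in a hypothetical summary query:
-- the R body sees the first argument as arg1; median() is base R
CREATE OR REPLACE FUNCTION r_median(float8[]) RETURNS float8 AS
$$
    median(arg1)
$$ LANGUAGE plr;

-- hypothetical usage against a log table:
-- SELECT test_suite, r_median(array_agg(duration_ms)) As median_duration
-- FROM test_logs GROUP BY test_suite;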
Continue reading "PL/R Part 2: Functions that take arguments and the power of aggregation"
Sunday, November 28. 2010
In this article we'll provide a summary of what PL/R is and how to get running with it. Since we don't like repeating ourselves,
we'll refer you to an article we wrote a while ago which is still fairly relevant today called Up and Running with PL/R (PLR) in PostgreSQL: An almost Idiot's Guide
and just fill in the parts that have changed. We should note that particular series was more geared toward the spatial database programmer (PostGIS in particular). There is a lot of overlap between the PL/R, R, and PostGIS user base, which is composed
of many environmental scientists and researchers in need of powerful charting and stats tools to analyze their data, and who tend to be high on the smart but low on the money end of the human spectrum.
This series will be more of a general PL/R user perspective. We'll follow more of the same style we did with Quick Intro to PL/Python. We'll end our series with a PL/R cheatsheet similar to what we had for PL/Python.
As stated in our State of PostGIS article, we'll be using log files we generated from our PostGIS stress tests. These stress tests were auto-generated from the PostGIS official documentation.
The raster tests comprise 2,095 query executions exercising all the supported pixel types. The geometry/geography tests comprise 65,892 spatial SQL queries exercising every PostGIS geometry/geography type supported in PostGIS 2.0 -- yes, this includes TINs, Triangles, Polyhedral Surfaces, Curved geometries, and all dimensions of them.
Most queries are unique. If you are curious to see what these log tables look like or want to follow along with these exercises, you can download the tables from here.
What is R and PL/R and why should you care?
R is both a language and an environment for doing statistics and generating graphs and plots. It is licensed under the GNU GPL and is a common favorite of universities and research institutions. PL/R is a procedural language for PostgreSQL that allows you to write database stored functions
in R. R is a set-based, domain-specific language similar to SQL, except that unlike the way relational databases treat data, it thinks of data as matrices, lists and vectors. I tend to think of it as a cross between LISP and SQL, though more experienced Lisp and R users will probably disagree with me on that. This makes it easier in many cases to tabulate data both across columns as well as across rows.
The examples we will show in these exercises could be done in SQL, but they are much more succinct to write in R. In addition to the language itself, there is a whole wealth of statistical and graphing functions available in R that you will not
find in any relational database. These functions keep growing as more people contribute packages. Its packaging system, the Comprehensive R Archive Network (CRAN), is similar in concept to Perl's CPAN and the in-the-works PGXN for PostgreSQL.
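As a teaser, once PL/R is installed in a database (typically by running the packaged plr.sql), a stored function is just R code; this trivial sketch returns the R version string:
CREATE OR REPLACE FUNCTION r_version() RETURNS text AS
$$
    return(R.version.string)
$$ LANGUAGE plr;

SELECT r_version();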
Continue reading "Quick Intro to R and PL/R - Part 1"
Friday, October 29. 2010
pgAdmin has a feature called pgScript. It's a very simple scripting language for running
tasks in a pgAdmin SQL window. The documentation is the PgScript manual.
Why would you use it over, say, writing a plpgsql function?
One main reason we use it is to run quick ad-hoc batch jobs such as geocoding addresses and so forth. The
benefit it has over running a stored function is that you don't have to run it as a single transaction.
This is important for certain kinds of tasks where you just want to run something in a loop and have each iteration commit
separately. To us the syntax, with the @, resembles SQL Server Transact-SQL more than it does any PostgreSQL language. When I first
saw pgScript I thought: Wow, pgAdmin talks Transact-SQL; now what will they think of next :).
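To give a rough idea of the flavor (a sketch with a hypothetical test_log table; the exact syntax is in the PgScript manual):
DECLARE @I;
SET @I = 0;
WHILE @I < 5
BEGIN
    -- each statement is sent and committed on its own, unlike a plpgsql loop
    INSERT INTO test_log(batch_number, logged_at) VALUES (@I, clock_timestamp());
    SET @I = @I + 1;
END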
Continue reading "pgAdmin pgScript"
Thursday, August 26. 2010
In part 1 of this series on PostgreSQL 9.0 planner outputs, we demonstrated how to render explain plans in YAML, JSON, and XML using the new explain features in PostgreSQL 9.0. In this second part,
we'll demonstrate how to build a user interface that allows you to input a JSON-formatted explain plan and have it render into a printable, navigable display using JQuery, JavaScript and a little bit of HTML coding.
In part 3 we'll do something similar using XML and XSLT programming.
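If you want to follow along, getting a JSON-formatted plan out of PostgreSQL 9.0 to paste into the viewer is as simple as:
EXPLAIN (ANALYZE true, COSTS true, FORMAT JSON)
SELECT relname, relkind
FROM pg_class
ORDER BY relname;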
For those who aren't familiar with JQuery, it is an MIT-licensed JavaScript library that is fairly lightweight, lets you inspect and change HTML elements with fairly intuitive syntax, and has some nice AJAX methods and tools for converting XML/JSON to native objects that can be manipulated.
You can check it out at JQUERY.
We are not experts in JQuery, but from what we have used of it, we really like it, and the facts that the base package is MIT-licensed, fairly lightweight, and has lots of plugins available for it are real pluses.
The most difficult thing I think most people find about reading explain plans is that they are upside down; a plan starts with the conclusion and backtracks how to arrive at it. Humans by nature think about planning steps from start to finish.
In order to make an explain plan understandable to mere mortals, we generally display it upside down or with the child nodes shown leftmost. We shall follow that approach.
Continue reading "Explain Plans PostgreSQL 9.0 - Part 2: JSON and JQuery Plan Viewer"
Sunday, August 22. 2010
PostgreSQL offers several options for displaying and querying tree-like structures.
In Using Recursive Common Table Expressions (CTE) to represent tree structures
we demonstrated how to use common table expressions to display a tree-like structure. Common Table Expressions require PostgreSQL 8.4 and above but are fairly ANSI standards compliant. In addition to that
approach you have the option of using recursive functions. There is yet another common approach for this which is specific to PostgreSQL: the ltree contrib datatype,
which has been supported for some time in PostgreSQL. For one of our recent projects, we chose ltree over the other approaches because performance is much better when you need to do ad-hoc queries over the tree, since ltree can take advantage of btree and gist indexes
and also has built-in tree query expressions that make ad-hoc queries simpler to write; similar in concept to the tsearch query syntax for querying text.
In this article we'll demonstrate how to use ltree, and along the way also show the new PostgreSQL 9.0 features of conditional triggers and ordered aggregates.
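As a small preview (hypothetical table; on pre-9.1 you enable ltree by running the packaged share/contrib/ltree.sql in your database):
CREATE TABLE org_tree(node_id serial PRIMARY KEY, node_path ltree);
CREATE INDEX idx_org_tree_node_path ON org_tree USING gist(node_path);
INSERT INTO org_tree(node_path)
VALUES ('Top'), ('Top.Sales'), ('Top.Sales.East'), ('Top.Engineering');

-- all of Top.Sales and its descendants; the <@ operator can use the gist index
SELECT node_id, node_path
FROM org_tree
WHERE node_path <@ 'Top.Sales';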
Continue reading "Using LTree to Represent and Query Hierarchy and Tree Structures"
Wednesday, July 21. 2010
In an earlier article, Where is Soundex and other Fuzzy string things, we covered the PostgreSQL contrib module fuzzystrmatch, which contains the very popular function
soundex found in other popular relational databases. We also covered the more powerful levenshtein distance, metaphone and
dmetaphone functions included in fuzzystrmatch, but rarely found in other relational databases.
As far as fuzzy string matching goes, PostgreSQL has other functions up its sleeve. This time we will cover
the contrib module pg_trgm, which was introduced in PostgreSQL 8.3. pg_trgm uses a concept called trigrams for doing string comparisons. The pg_trgm module has several functions and gist/gin operators.
Like other contrib modules, you just need to run the /share/contrib/pg_trgm.sql file packaged in your PostgreSQL install to enable it in your database.
For this set of exercises, we'll use trigrams to compare words using the same set of data we tested
with soundex and metaphones. For the next set of exercises, we will be using the places dataset we created in Importing Fixed width data into PostgreSQL with just PSQL.
The most useful are the similarity function and the
% operator. The % operator allows for using a GIST/GIN index, and the similarity function allows for narrowing your filter, similar to what
levenshtein did for us in fuzzystrmatch.
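As a quick sketch of what that looks like (assuming a places table with a loc_name text column, as in the dataset mentioned above):
-- a gin trigram index lets the % operator use an index scan (8.4+)
CREATE INDEX idx_places_loc_name_trgm ON places USING gin(loc_name gin_trgm_ops);

SELECT loc_name, similarity(loc_name, 'Boxford') As sim
FROM places
WHERE loc_name % 'Boxford'
ORDER BY sim DESC;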
Continue reading "Fuzzy string matching with Trigram and Trigraphs"
Wednesday, June 02. 2010
One of the coolest features of PostgreSQL is the ability to write functions using plain old
SQL. It has had this feature for a long time, since even before PostgreSQL 8.2. No other database, to our knowledge, has this feature. By SQL we mean SQL sans procedural mumbo jumbo like
loops and whatnot. This is cool for two reasons:
- Plain old SQL is the simplest to write, most anyone can write it, and it is just what the doctor ordered in many cases. PostgreSQL even allows you to write
aggregate functions with plain old SQL. Try to write an aggregate function in SQL Server and
you've got to pull out your Visual Studio this and that, do some compiling and loading, and you had better know C# or VB.NET. Try it in MySQL and you had better learn C.
Do the same in PostgreSQL (where you have a large choice of languages, including SQL) and the code is simple to write. Never mind that
with MySQL and SQL Server you often aren't even allowed to do those kinds of things on a shared server or a server where the IT department is paranoid. The closest
in this much ease would be Oracle, which is unnecessarily verbose.
- Most importantly -- since it is just SQL, for simple user-defined functions, a PostgreSQL SQL function can often be inlined into the overall query plan since
it only uses what is legal in plain old SQL.
This inlining feature is part of the secret sauce that makes PostGIS fast and easy to use.
So instead of writing geom1 && geom2 AND Intersects(geom1,geom2), a user can write
ST_Intersects(geom1,geom2). The shorthand is even more striking when you think of the ST_DWithin function.
With an inlined function, the planner has visibility into the function and breaks apart the
spatial index short-circuit test && from the more exhaustive absolute test Intersects(geom1,geom2)
and has great flexibility in reordering the clauses in the plan.
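To make the first point concrete, here is a minimal sketch of a plain SQL function and a plain-SQL-backed aggregate (names are made up):
-- a simple SQL function; simple bodies like this are candidates for inlining
CREATE OR REPLACE FUNCTION cp_fullname(first_name text, last_name text)
RETURNS text AS
$$
    SELECT $1 || ' ' || $2;
$$ LANGUAGE sql IMMUTABLE;

-- an aggregate whose state function is plain SQL -- no C, C# or VB.NET required
CREATE OR REPLACE FUNCTION concat_semi(state text, nextval text)
RETURNS text AS
$$
    SELECT CASE WHEN $1 = '' THEN $2 ELSE $1 || ';' || $2 END;
$$ LANGUAGE sql IMMUTABLE;

CREATE AGGREGATE concat_semi_agg(text) (
    SFUNC = concat_semi,
    STYPE = text,
    INITCOND = ''
);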
Continue reading "STRICT on SQL Function Breaks In-lining Gotcha"
Saturday, January 09. 2010
UPDATE: Thanks all for the suggestions. For now we ended up increasing the
seq_page_cost from 1 to 2 in the database. That has gotten us back to our old, much faster speeds without any change in code and seems to have
improved the speeds of other queries as well, without reducing the speed of any.
ALTER DATABASE mydb SET seq_page_cost=2;
As Jeff suggested, we'll try to come up with a standalone example that exhibits the behavior. The example below was more to demonstrate the construct. Table names and fields were changed to protect the innocent, which is why we didn't bother showing explain plans. The behavior also seems to have to do
with the distribution of data and gets worse when stats are updated (via vacuum analyze). We didn't see this in PostgreSQL 8.3, and this was a system recently upgraded from 8.3 to 8.4.
---ORIGINAL ARTICLE HERE --
This is a very odd thing and I think it has happened to us perhaps once before.
It's a bit puzzling, and we aren't particularly happy with our workaround because it's
something that looks a bit bizarre to a casual observer. The hack is turning the enable_seqscan setting
off for a particular query to force the planner to use the indexes available to it.
What is particularly troubling about this problem is that it wasn't always this way.
This is a piece of query code we've had in an application for a while, and it has worked, shall
I say, really fast: response times of 300 ms to 1 sec for what is not a trivial query against a not
so trivially sized hierarchy of tables.
At any rate, one day this query that we were very happy with suddenly started
hanging, taking 5 minutes to run. Sure, data had been added and so forth, but that didn't
completely explain this sudden change of behavior. The plan it took had changed drastically.
It just suddenly decided to stop using a critical index it had always used. Well, it was still using it, but just on
the root table, not the children. Querying a child directly proved that it still refused to use the index,
so it didn't seem to be the hierarchy at fault here.
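For reference, the workaround we're not thrilled about looks roughly like this; SET LOCAL keeps the change scoped to one transaction:
BEGIN;
SET LOCAL enable_seqscan = off;
-- run the problem query here; the setting reverts automatically at COMMIT or ROLLBACK
COMMIT;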
Continue reading "Forcing the planner's hand with set enable_seqscan off WTF"
Friday, January 01. 2010
UPDATE: Since this article was written, Yum now comes packaged with PostGIS 1.5. Please refer to PostGIS Yum Install for PostgreSQL 9.0 if you want to go with a faster, more idiot-proof install process.
A couple of people have asked about this, so we thought we would address the common issues people run into
when compiling PostGIS.
The first question some will ask is: Why don't you just take the packaged PostGIS that comes with the PostgreSQL
Yum repository?
There are two general reasons we don't install PostGIS using Yum:
- We like running a newer GEOS than what is generally packaged with Yum because certain PostGIS functionality
is not enabled with older versions of GEOS. For example, functions like ST_Covers, ST_CoveredBy, and
ST_ContainsProperly, as well as
the Cascaded Union and prepared geometry speed enhancements, require GEOS 3.1+. GEOS 3.2 brought more robust handling of topological
exceptions, new buffering features like one-sided buffer (which the upcoming PostGIS 1.5 exposes if and only if you are running
GEOS 3.2+), faster buffering, and several memory leak cleanups. We highly recommend using GEOS 3.2.0. It is just a more robust product than prior GEOS versions.
- We generally like to run newer versions of PostGIS than what Yum provides and also to run multiple versions of PostGIS
on the same server. This is a bit tricky to do with Yum since it only exposes one version, and generally an older one than we
care for. PostGIS 1.5 will be especially great since it will have geodetic support and cool functions like ST_ClosestPoint
and other major goodies.
NOTE: PostGIS 1.3, PostGIS 1.4, PostGIS 1.5 can coexist on the same PostgreSQL daemon service as long as they are installed in different databases. They will all share the same Proj and GEOS. So installing PostGIS 1.5 will not break your
PostGIS 1.4 or 1.3 installs. The newer GEOS 3.2.0 C-API is backward compatible with older GEOS C-API and the C-API is what PostGIS uses.
Continue reading "Compiling PostGIS 1.5 and installing after Yum PostgreSQL Install"
Monday, September 28. 2009
This is along the lines of more stupid window function fun and how many ways we can abuse this technology in PostgreSQL. Well, actually, we were using this approach to allocate geographic areas such that each area has approximately the same population
of things. So you can imagine densely populated areas would have smaller regions and more of them, and less dense areas would have larger regions but fewer of them (kind of like US Census tracts).
You also have to think about how you allocate your regions so you don't end up with a multipolygon where one part is in one part of the world and another part somewhere else entirely. Using window aggregation in conjunction with spatial sorting algorithms is one approach.
The non-spatial equivalent of this problem is how you shove people into an elevator and ensure you don't exceed the capacity of the elevator for each ride. Below is a somewhat naive way of doing this.
The idea is that you keep summing the weights until you reach capacity and then start a new grouping.
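Something like this sketch, assuming a hypothetical people(person_name, weight) table and an elevator capacity of 500. Note the naive part: a group can still overflow when one heavy rider straddles a capacity boundary.
SELECT person_name, weight,
       SUM(weight) OVER (ORDER BY person_name) As running_weight,
       CEIL(SUM(weight) OVER (ORDER BY person_name) / 500.0) As elevator_trip
FROM people
ORDER BY person_name;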
Continue reading "Allocating People into Groups with Window aggregation"
Sunday, August 16. 2009
A very long time ago, we wrote an article on how to use PostgreSQL to show the fully qualified name of an item in an inventory tree.
Basically we were modeling a paper products tree. The original article can be found here: Using PostgreSQL User-Defined Functions to solve the Tree Problem. It was based on PostgreSQL 7.4 technology.
We'll repeat the text here for completeness and demonstrate the PostgreSQL 8.4 approach that solves this more efficiently.
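For the impatient, the heart of the 8.4 approach is a recursive CTE along these lines (hypothetical products table with item_id, item_name, parent_item_id columns):
WITH RECURSIVE tree AS (
    SELECT item_id, item_name, parent_item_id, item_name::text As full_name
    FROM products
    WHERE parent_item_id IS NULL
    UNION ALL
    SELECT c.item_id, c.item_name, c.parent_item_id,
           t.full_name || ' -> ' || c.item_name
    FROM products As c
    INNER JOIN tree As t ON c.parent_item_id = t.item_id
)
SELECT full_name
FROM tree
ORDER BY full_name;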
Continue reading "Using Recursive Common table expressions to represent Tree structures"