Tuesday, May 10. 2011
Question: What is the difference between CURRENT_TIMESTAMP and clock_timestamp()?
Answer: CURRENT_TIMESTAMP is an ANSI SQL standard variable you will find in many relational databases, including PostgreSQL, SQL Server, Firebird, IBM DB2, and MySQL, that returns the timestamp of the start of the current transaction. The important thing to keep in mind is that it holds a single value for the whole transaction, so in a long-running transaction you won't see it change as you go along.
clock_timestamp() is a PostgreSQL function that always returns the current clock time. I don't think I'm alone in using it for simple benchmarking and other tasks where, for example, I need to record the timing of each part of a function from within the function using pedestrian RAISE NOTICE debug print statements.
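To see the difference in action, here is a minimal sketch: within a single transaction, CURRENT_TIMESTAMP stays pinned at the transaction start while clock_timestamp() keeps ticking.
BEGIN;
SELECT CURRENT_TIMESTAMP As txn_start, clock_timestamp() As wall_clock;
SELECT pg_sleep(2);  -- pause for 2 seconds
SELECT CURRENT_TIMESTAMP As txn_start, clock_timestamp() As wall_clock;
-- txn_start is identical in both SELECTs; wall_clock advances by about 2 seconds
COMMIT;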
There is another cool way I like to use it: for a batch of records that each involve an expensive function call, benchmarking how long each record takes to process.
One of the things I'm working on is improving the speed of the tiger_geocoder packaged in PostGIS 2.0. The first route of attack seemed to be the normalize_address function, which I noticed was taking anywhere from 10% to 50% of my time in the geocode process. That's a ton of time if you are trying to batch geocode a ton of records. The thing is, the function's speed is very sensitive to how badly formed an address is, so a whole batch can be held up by one bad apple; and since the batch doesn't return until all records are processed, the whole thing seems to take a while.
So rather than looping through each record, I thought it would be cool if I could run the batch but have each record tell me how long it took to process relative to the rest, so I could get a sense of what a problem address looks like. So I wrote this query:
WITH ctbenchmark AS
(SELECT *,
    the_time - COALESCE(lag(the_time) OVER(ORDER BY the_time), CURRENT_TIMESTAMP) As process_time,
    the_time - CURRENT_TIMESTAMP As diff_from_start
 FROM (SELECT address_1, city, state, zip,
        pprint_addy(normalize_address(coalesce(address_1,'') || ', ' || coalesce(city || ' ','') || state || ' ' || zip)) As pp_addr,
        clock_timestamp() As the_time
       FROM testgeocode LIMIT 1000) As foo)
SELECT *
FROM ctbenchmark
WHERE process_time > '00:00:00.016'::interval;
This returned output something like this:
 address_1     | city | state | zip    | pp_addr    | the_time                   | process_time | diff_from_start
---------------+------+-------+--------+------------+----------------------------+--------------+-----------------
 48 MAIN ST .. | S..  | MA    | 021..  | 48 MAIN .. | 2011-05-10 03:24:43.078-04 | 00:00:00.032 | 00:00:00.032
 15 ...        |      | MA    | 018... | 15 GREN... | 2011-05-10 03:24:50.796-04 | 00:00:00.031 | 00:00:07.75
Thursday, April 21. 2011
In the past I have always chosen to compile my own PostGIS because the GEOS library that came with the regular PostgreSQL yum install was a bit antiquated. This has changed, so this time around I figured I'd try the 1.5.2 release of PostGIS available via the Yum Package List.
Before you follow these directions, make sure you have PostgreSQL 9.0 set up via our An almost idiot's guide to Install PostgreSQL 9.0 with Yum.
PostGIS in Action has started shipping from Amazon, and we already have 3 positive reviews. We are hoping to write another book sometime soon but haven't decided on the topic yet. It will definitely have something to do with databases and probably a lot of PostgreSQL in it.
Continue reading "An Almost Idiot's Guide to Installing PostGIS 1.5 on PostgreSQL 9.0 via Yum"
Monday, April 18. 2011
In this exercise, we'll go through installing PostgreSQL 9.0 on a CentOS 5.5 32-bit box. I'm testing this on a GoGrid cloud server so I can run parallel benchmarks between my Windows GoGrid and Linux GoGrid servers.
Upgrading from PostgreSQL 8.* to PostgreSQL 9.0
If you are upgrading from PostgreSQL 8.4 to PostgreSQL 9.0, please refer to Devrim's article:
Upgrading from 8.4 to 9.0 on Fedora / Red Hat / CentOS using RPMs.
For the rest of this article, we'll go over configuring yum to use the PostgreSQL PGDG Yum repository managed by Devrim Gunduz, which has the latest and greatest of 9.0 as well as the latest 9.1 development release. We'll also demonstrate how to run two instances of PostgreSQL side by side, so you can experiment with the new features of PostgreSQL 9.1 while reminiscing about the soon-to-be old-hat features of PostgreSQL 9.0.
Continue reading "An almost idiot's guide to Install PostgreSQL 9.0 with Yum"
Friday, April 08. 2011
In a prior article, Use of Out and InOut Parameters, we demonstrated how to use OUT and INOUT parameters to return a set of records from a PostgreSQL function.
There is another approach to doing this: the ANSI standard RETURNS TABLE construct.
If you come from a SQL Server or IBM DB2 background, RETURNS TABLE is probably most familiar, but how you use it and what is legal in it differ a little from SQL Server and IBM DB2. We'll save the compare and contrast as a topic for another article.
In terms of performance between OUT parameters and RETURNS TABLE, we haven't noticed much of a difference. The main thing that is nice about RETURNS TABLE is that it's syntactically more pleasing, in the sense that it makes the structure of what you are returning clearer.
In these next examples, we'll demonstrate examples similar to those in the aforementioned article, except using RETURNS TABLE.
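As a quick taste before the detailed examples, here is a minimal sketch of the two approaches side by side; the function names and the generate_series body are just ours for illustration:
-- OUT-parameter version
CREATE OR REPLACE FUNCTION fn_squares_out(n integer,
    OUT num integer, OUT num_squared integer)
RETURNS SETOF record AS
$$ SELECT i, i*i FROM generate_series(1, $1) As i; $$
LANGUAGE sql;
-- equivalent RETURNS TABLE version
CREATE OR REPLACE FUNCTION fn_squares_table(n integer)
RETURNS TABLE(num integer, num_squared integer) AS
$$ SELECT i, i*i FROM generate_series(1, $1) As i; $$
LANGUAGE sql;
-- both are called the same way
SELECT * FROM fn_squares_table(5);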
Be warned that the RETURNS TABLE construct is only available in PostgreSQL 8.4+, while the OUT approach has existed since PostgreSQL 8.1. That means if you need your code to work on 8.3 or lower, you can't use RETURNS TABLE.
When in doubt about a feature, and you are creating code that needs to support earlier versions of PostgreSQL (as we have to in the PostGIS development group), or you want a bit of ammunition to get stubborn users off antiquated versions of PostgreSQL (as we also have to on PostGIS development, including with our own developers -- and you know who you are :) ), check the PostgreSQL feature matrix. It will save you a lot of grief.
Continue reading "Using RETURNS TABLE vs. OUT parameters"
Wednesday, March 30. 2011
I am happy to report that the final proof of the PostGIS in Action e-book was released today, and the printed version is scheduled for release April 11, 2011, and should be available on Amazon and other outlets around then. The other e-reader formats will come after that.
You can buy it from here or download the two free chapters, if you haven't already.
Each hard-copy purchase comes with a free e-book version; there is a coupon in the back of the book for getting the e-book versions.
Yes, I know it's been a really, really long time.
On the bright side, we produced twice as much content as we set out to, and that was while keeping things as concise as we could get away with, still managing to cover more than we planned, and stripping out as many unnecessary words as we could muster.
So 520 pages and almost two years later, this is where we are.
A good chunk of the additional bulk of the book is the appendices, which total about 150 pages and focus strictly on PostgreSQL and SQL. After many comments from early reviewers, we thought it unfair not to include a good chunk of PostgreSQL and general relational database content to familiarize programmers and GIS folks with the RDBMS that PostGIS lives in. Unfortunately, most GIS folks have the hardest time getting up to speed with SQL and standard RDBMS management.
Two free chapters and accompanying code for all chapters
We selectively picked the two free chapters we thought would be most beneficial to newcomers and people new to relational databases.
The free chapters are:
- Chapter 1: What is a spatial database? Provides a fast-paced history of PostGIS, PostgreSQL, and spatial databases, then moves into an even faster journey through converting flat-file restaurant locations to spatial point geometries and loading an ESRI shapefile of roads, and finally shows you how to write standard spatial queries and render the results.
- Appendix C: SQL Primer -- goes through querying the information_schema, the common points of writing SELECT, INSERT, UPDATE, and DELETE SQL statements, and the finer points of using aggregate functions, window constructs, and common table expressions, as well as a brief overview of how PostgreSQL stacks up against other relational databases (SQL Server, Oracle, IBM DB2, MySQL, Firebird) in SQL features.
- All the chapter code and accompanying data. It's a bit hefty at 57 MB.
So even if you don't buy our book, we hope you find the free chapters useful.
You can get a more detailed listing of all the chapters from the PostGIS in Action book site.
We'd like to thank all those who supported us through this long and unpredictable journey. Hopefully we'll have several more, though hopefully
a bit less nerve-racking than this first one.
Monday, February 21. 2011
We were setting up another SQL Server 2005 64-bit server where we needed a linked server connection to our PostgreSQL 9.0 server. This is something we've done before, so nothing new, and something we documented in Setting up PostgreSQL as a Linked Server in Microsoft SQL Server 64-bit.
What was different this time is that we decided to use the latest version of the new PostgreSQL 64-bit ODBC drivers now available on the main PostgreSQL site: http://www.postgresql.org/ftp/odbc/versions/msi/.
Sadly these did not work for us. They seemed to work fine in our MS Access 2010 64-bit install, but if you tried to run a query with them via SQL Server, SQL Server would choke with the message:
Msg 7350, Level 16, State 2, Line 1 Cannot get the column information from OLE DB provider "MSDASQL"
You can, however, see all the tables via the linked server tab.
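For the record, even a simple pass-through query along these lines would trigger it (PGLINK here is a stand-in for whatever you named your linked server):
SELECT *
FROM OPENQUERY(PGLINK, 'SELECT table_name FROM information_schema.tables');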
Continue reading "SQL Server 64-bit Linked Server woes"
Saturday, February 19. 2011
Question: You have a system of products and categories, and you want a product to be allowed in multiple categories but in only one main category.
How do you enforce this rule in the database?
Some people will ask: why can't you just deal with this in your application logic? Our general answer is that much of our updating doesn't happen at the application level. We like enforcing rules at the database level because it saves us from ourselves. We are in the business of massaging data. For this particular kind of example, we wanted the database to provide a nice safety net so that we wouldn't accidentally assign a product to two main categories.
Answer
There are two approaches we thought of. The obvious one is to have a primary category column on the product plus a bridge table for secondary categories. That is an ugly solution, because when you query you have to do a union and always treat the secondary categories differently from the main one, and for most use cases we don't care about distinguishing primary from secondary categories.
The solution we finally settled on was a single bridge table with a boolean field flagging the main category, with the only-one-main-category requirement enforced by a partial index. Not all databases support partial indexes; this is one major value of PostgreSQL -- you have so many more options for implementing logic.
As some people noted in the comments and the reddit entry, SQL Server 2008 has a similar feature called filtered indexes, though PostgreSQL has had partial indexes for as far back as I can remember. This brings up an interesting point I have observed: if you were using PostgreSQL before, you would already know how to use the filtered indexes and multi-row inserts introduced in SQL Server 2008, and the SEQUENCE feature coming in SQL Server 2012. So we should all use PostgreSQL, because it teaches us how to use the newer versions of SQL Server before they come out. :)
So how does the partial index solution look? NOTE: for simplicity, we are leaving out the complementary tables and the foreign key constraints that we also have in place.
CREATE TABLE products_categories
(
category_id integer NOT NULL,
product_id integer NOT NULL,
main boolean NOT NULL DEFAULT false,
orderby integer NOT NULL DEFAULT 0,
CONSTRAINT products_categories_pkey PRIMARY KEY (category_id, product_id)
);
CREATE UNIQUE INDEX idx_products_categories_primary
ON products_categories
USING btree
(product_id)
WHERE main = true;
Testing it out, it saves us and gives us a nice informative message to boot.
INSERT INTO products_categories(category_id, product_id, main)
VALUES (1,2,true), (3,2,false), (3,3,true), (4,2,true);
ERROR: duplicate key value violates unique constraint "idx_products_categories_primary"
DETAIL: Key (product_id)=(2) already exists.
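As a bonus, because we usually don't care about distinguishing primary from secondary, everyday queries stay simple. Here's a quick sketch of pulling each product's main category -- no UNION needed, and since the WHERE clause matches the partial index's predicate, the index can service it:
SELECT product_id, category_id
FROM products_categories
WHERE main = true;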
Tuesday, December 28. 2010
In a prior article we did a review of PostgreSQL 9 Admin Cookbook, by Simon Riggs and Hannu Krosing. In this article
we'll take a look at the companion book PostgreSQL 9 High Performance by Greg Smith.
Both books are published by Packt Publishing and can be bought directly from Packt or via Amazon. Packt is currently running a 50%-off sale if you buy both books (e-book versions) directly from Packt. In addition, Packt offers free shipping to the US, UK, Europe, and select Asian countries.
For starters: PostgreSQL 9 High Performance is a more advanced book than PostgreSQL 9 Admin Cookbook and is more of a sit-down read. At about 450 pages, it's a bit longer than the Admin Cookbook, and unlike that book, it is a concepts book and much less of a cookbook.
It's not a book you would pick up if you are new to databases and trying to feel your way through PostgreSQL; however, if you feel comfortable with databases in general, not necessarily PostgreSQL, and are trying to eke out the most performance you can, it's a handy book. What surprised me most was how much of it is not specific to PostgreSQL but covers hardware considerations pertinent to most relational databases.
In fact, Greg Smith starts the book off with a fairly shocking statement in the section entitled PostgreSQL or another database?: "There are certainly situations where other database solutions will perform better." Those are words you will rarely hear from die-hard PostgreSQL users, bent on defending their database of choice against all criticism and framing PostgreSQL as the tool that will solve famine, bring world peace, and cure cancer if only everyone would stop using that other thing and use PostgreSQL instead. :)
That, in my mind, made this book a more trustworthy reference if you come from some other DBMS and want to know whether PostgreSQL can meet your needs as well as or better than what you were using before.
In a nutshell, if I were to compare and contrast PostgreSQL 9 Admin Cookbook with PostgreSQL 9 High Performance, I would say the Cookbook is a much lighter read focused on getting familiar with and getting the most out of the software (PostgreSQL), while PostgreSQL 9 High Performance is focused on getting the most out of your hardware and pushing it to its limits to work with PostgreSQL. There is very little overlap of content between the two, and as you take on more sophisticated projects, you'll definitely want both books on your shelf. The PostgreSQL 9 High Performance book isn't going to teach you much about writing better queries, day-to-day management, or loading data, but it will tell you how to determine when your database is under stress or your hardware is about to kick the bucket, and what is causing that stress. It's definitely a book to have if you plan to run large PostgreSQL databases or a high-traffic site with PostgreSQL.
PostgreSQL 9 High Performance is roughly 25% about hardware and choosing the best hardware for your budget; 40% in-depth details about how PostgreSQL works with your hardware and the trade-offs PostgreSQL developers have made to strike a healthy balance between performance and reliability; and 35% about various useful tools for monitoring PostgreSQL and general hardware performance. Its focus is mostly Linux/Unix, which is not surprising since most production PostgreSQL installs are on Linux/Unix. That said, there is some Windows coverage, such as a FAT32/NTFS discussion, considerations for deploying terabyte-size databases on Windows, and issues with shared memory on Windows.
Full disclosure: I got a free e-Book copy of this book just as I did with PostgreSQL 9 Admin Cookbook.
Continue reading "PostgreSQL 9 High Performance Book Review"
Friday, December 24. 2010
Question: You have a table of people and a table that specifies the activities each person is involved in. You want to return a result that has one record per person and a column listing that person's activities separated by semicolons and sorted alphabetically. You also want the whole set sorted alphabetically by the person's name.
This is a question we are always asked, and since we mentor on various flavors of databases, we need to be able to switch gears and provide an answer that works on the client's database. Most often there is the additional requirement that you can't install new functions in the database, which means that for PostgreSQL and SQL Server -- both of which support defining custom aggregates -- custom aggregates are out as an option.
Normally we try to come up with an answer that works in most databases, but sadly the only solution that works in most of them is to push the problem off to the client front end, throw up your hands, and proclaim: "This ain't something that should be done in the database; it's a reporting problem." That is in fact what many database purists do, and all I can say to them is: wake up and smell the coffee before you are out of a job. We feel that data transformation is an important function of a database, and if your database is incapable of massaging the data into a format your various client apps can easily digest, WELL THAT's A PROBLEM.
We shall now document the answer rather than answering it for the umpteenth time. For starters, PostgreSQL has a lot of answers to this question, probably more than any other database, though some are easier to execute than others and many depend on the version of PostgreSQL you are using. SQL Server has two classes of answers, neither of which is terribly appealing, but we'll go over the ones that don't require installing .NET stored functions in your database, since we said that is often a requirement. MySQL has a fairly simple, elegant, and very portable approach that it has had for a really long time.
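As a preview of the PostgreSQL answer, here is a minimal sketch using the string_agg function new in PostgreSQL 9.0 (the people/activity table and column names are hypothetical), with MySQL's long-standing GROUP_CONCAT equivalent noted in a comment:
SELECT p.person_name,
    string_agg(a.activity, ';' ORDER BY a.activity) As activities
FROM people As p
    INNER JOIN people_activities As a ON a.person_id = p.person_id
GROUP BY p.person_name
ORDER BY p.person_name;
-- MySQL equivalent: GROUP_CONCAT(a.activity ORDER BY a.activity SEPARATOR ';')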
Continue reading "String Aggregation in PostgreSQL, SQL Server, and MySQL"
Monday, December 20. 2010
In Part 2 of PL/R, we covered how to build PL/R functions that take arrays and output textual renderings of generated R objects, and we used this in an aggregate SQL query with array_agg. Often when you are building PL/R functions, you'll have R functions you want to reuse many times, either inside a single PL/R function or across various PL/R functions.
Unfortunately, you can't call a PL/R function from another PL/R function except via a pg.spi.exec call.
There is another way to embed reusable R code in a PostgreSQL database. To share database-stored R code across various PL/R functions, PL/R has a feature called plr_module. In this tutorial we'll learn how to create and register shareable R functions with plr_module. In the next part of this series, we'll start to explore generating graphs with PL/R.
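As a preview, registering shared R code amounts to inserting it into the special plr_modules table (the table definition is from the PL/R docs; the little R helper is our own made-up example):
CREATE TABLE plr_modules (modseq int4, modsrc text);
INSERT INTO plr_modules
    VALUES (0, 'r_semicolon_join <- function(x) paste(x, collapse = ";")');
-- pick up the new module in the current session (new sessions load it automatically)
SELECT reload_plr_modules();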
Continue reading "PL/R Part 3: Sharing Functions across PL/R functions with plr_module"
Friday, December 10. 2010
In Intro to PL/R and R, we covered how to enable the PL/R language in a database and wrote some PL/R functions that rendered plain-text reports using the R environment. What makes combining R and PostgreSQL in PL/R most powerful is when you start writing SQL summary queries that use R functions like any other SQL function.
In this next example, we'll be using PostGIS test runs from tests we autogenerated from the official PostGIS documentation (Documentation Driven Testing (DDT)), as described in the Garden Test section of the PostGIS developer wiki.
We've also updated some of our logging generators and test patterns, so future results may not match what we demonstrated in the last article.
On a side note: among the changes in the tests was the introduction of more variants of the empty geometry now supported by PostGIS 2.0.
Our beloved PostGIS 2.0 trunk is at the moment somewhat unstable when working with these new forms of emptiness and with stuffing geometries in inappropriate places; it doesn't yet survive the mindless machine-gun battery of tests we have mercilessly inflicted.
It's been great fun trying to build a better dummy while watching Paul run around patching holes to make the software more dummy-proof, as the dummy stumbles across questionable but amusing PostGIS use cases not gracefully handled by his new serialization and empty-geometry logic.
On yet another side note, it's nice to see that others are doing similarly wonderful things with documentation. Check out Euler's comment on catalog tables, where he uses the PostgreSQL SGML documentation to autogenerate PostgreSQL catalog table comments -- using OpenJade's OSX to convert the SGML to XML and then XSL, similar to what we did with the PostGIS documentation to autogenerate PostGIS function/type comments and as a platform for our test generator.
For our next exercises, we'll be using the power of aggregation to push data into R instead of pg.spi.exec. This will make our functions far more reusable and versatile.
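As a sketch of the pattern (our r_median wrapper and the test_log table/column names are placeholders): rather than querying from inside R, we collapse the values into an array with array_agg and hand the array to a PL/R function, where it arrives as an R vector named arg1.
CREATE OR REPLACE FUNCTION r_median(float8[]) RETURNS float8 AS
$$ median(arg1) $$ LANGUAGE plr;
SELECT test_name, r_median(array_agg(duration_secs)) As median_secs
FROM test_log
GROUP BY test_name
ORDER BY median_secs DESC;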
Continue reading "PL/R Part 2: Functions that take arguments and the power of aggregation"
Sunday, November 28. 2010
In this article we'll provide a summary of what PL/R is and how to get up and running with it. Since we don't like repeating ourselves, we'll refer you to an article we wrote a while ago that is still fairly relevant today, Up and Running with PL/R (PLR) in PostgreSQL: An almost Idiot's Guide, and just fill in the parts that have changed. We should note that that particular series was geared more toward the spatial database programmer (PostGIS in particular). There is a lot of overlap between the PL/R, R, and PostGIS user bases, which are comprised of many environmental scientists and researchers in need of powerful charting and stats tools to analyze their data -- folks high on smarts but low on money.
This series will take a more general PL/R user perspective. We'll follow much the same style we used in Quick Intro to PL/Python, and we'll end the series with a PL/R cheatsheet similar to the one we had for PL/Python.
As stated in our State of PostGIS article, we'll be using log files we generated from our PostGIS stress tests. These stress tests were auto-generated from the official PostGIS documentation.
The raster tests comprise 2,095 query executions exercising all the supported pixel types. The geometry/geography tests comprise 65,892 spatial SQL queries exercising every geometry/geography type supported in PostGIS 2.0 -- yes, this includes TINs, triangles, polyhedral surfaces, curved geometries, and all dimensions of them.
Most queries are unique. If you are curious to see what these log tables look like or want to follow along with these exercises, you can download the tables from here.
What is R and PL/R and why should you care?
R is both a language and an environment for doing statistics and generating graphs and plots. It is GNU-licensed and a common favorite of universities and research institutions. PL/R is a procedural language for PostgreSQL that allows you to write database stored functions in R. R is a set-based, domain-specific language similar to SQL, except that where relational databases treat data as tables, R thinks of data as matrices, lists, and vectors. I tend to think of it as a cross between LISP and SQL, though more experienced Lisp and R users will probably disagree with me on that. This makes it easier in many cases to tabulate data across columns as well as across rows.
The examples we will show in these exercises could be done in SQL, but they are much more succinct to write in R. In addition to the language itself, there is a whole wealth of statistical and graphing functions available in R that you will not find in any relational database, and these functions keep growing as more people contribute packages. R's packaging system, the Comprehensive R Archive Network (CRAN), is similar in concept to Perl's CPAN and the in-the-works PGXN for PostgreSQL.
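If you already have PL/R enabled, a sanity-check function takes only a couple of lines -- this one simply echoes R's version string back as text:
CREATE OR REPLACE FUNCTION r_version() RETURNS text AS
$$ R.version.string $$ LANGUAGE plr;
SELECT r_version();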
Continue reading "Quick Intro to R and PL/R - Part 1"
Tuesday, November 23. 2010
I've always enjoyed dismantling things. Deconstruction was a good way of analyzing how things were built, by cataloging all the ways I could dismantle or destroy them. I experimented with mechanical systems, electrical circuitry, chemicals, and biological systems, sometimes coming close to bodily harm. In later years I decided to play it safe and just stick with programming and computer simulation as a convenient channel for my destructive pursuits.
Now getting to the point of this article.
In later articles, I'll start to demonstrate the use of PL/R, the procedural language for PostgreSQL that allows you to program functions in the statistical language and environment R. To make those examples more useful, I'll be analyzing data generated from PostGIS tests I've been working on for stress testing the upcoming PostGIS 2.0. PostGIS 2.0 is a major release and probably the most exciting one for us. Paul Ramsey recently gave a summary talk, Past, Present, Future of PostGIS, at State of PostGIS FOSS4G Japan (http://www.ustream.tv/recorded/10667125), which provides a brief glimpse of what's in store in 2.0.
Continue reading "The State of PostGIS, Joys of Testing, and PLR the Prequel"
Sunday, November 21. 2010
Backup and restore are probably the most important things to know how to do when you have a database with data you care about.
The PostgreSQL utilities that accomplish these tasks are pg_dump, pg_dumpall, pg_restore, and, for restoring plain-text dumps, psql.
You use pg_dump to back up a single database or selected database objects, pg_restore to restore a dump to another database or to recover portions of a database, and pg_dumpall to dump all your databases in plain-text format.
A lot of the switches are common to all three utilities, so rather than trying to keep track of which switch works with which, we decided to combine them all into a single cheat sheet with a column denoting which utilities support each switch. Pretty much all the text is compiled from the --help output of each utility.
We created a similar backup-and-restore cheat sheet for PostgreSQL 8.3, and since then new features have been added, such as the --jobs parallel restore option in 8.4. We have now created an updated sheet covering all features present in the PostgreSQL 9.0 pg_dump, pg_restore, and pg_dumpall command-line utilities.
A PDF portrait version (8.5" x 11") of this cheat sheet is available at PostgreSQL 9.0 Dump Restore 8.5 x 11, and it is also available in PDF A4 format and HTML.
As usual, please let us know if you find any errors or omissions, and we'll be happy to correct them.
Wednesday, November 03. 2010
I was excited when PostgreSQL 9 Admin Cookbook, by Simon Riggs and Hannu Krosing, and the companion book PostgreSQL 9 High Performance, by Greg Smith, became available. All three authors are well-known experts in the PostgreSQL community and consultants at 2ndQuadrant, so you are sure to learn a lot from both books.
Both books are published by Packt Publishing and can be bought directly from Packt or via Amazon. Packt is currently running a 50%-off sale if you buy both books (e-book versions) directly from Packt, and it offers free shipping to the US, UK, Europe, and select Asian countries. The pair of books make attractive companions.
The main thing I felt was missing in this duo was a book dedicated to PostgreSQL the platform: one covering all the various PL languages and the neat ways PostgreSQL is used and has been extended by many to do things one would not normally expect of a database.
Some day perhaps someone will write such a book.
This article is a review of PostgreSQL 9 Admin Cookbook; we'll follow up later with a review of PostgreSQL 9 High Performance.
This is my first book review. I have a lot of patience for writing but little patience when it comes to reading. That said, I found PostgreSQL 9 Admin Cookbook an easy and enjoyable read, and a book I managed to learn more tricks from than I care to admit. It is a handy book to have for reference regardless of whether you consider yourself a novice, intermediate, or advanced user.
As the title suggests, it's a cookbook, but one that combines a question-and-answer style with a discussion style of writing. The tasks are neatly categorized into 12 chapters, and each task builds smoothly on previously discussed tasks. It is still organized so that you can jump to the particular task you are having problems with without having read the rest of the book.
Although it is titled PostgreSQL 9, it covers earlier versions as well.
Continue reading "PostgreSQL 9 Admin Cookbook Book Review"