One of the things I'm excited about in PostgreSQL 9.2 is the new pg_dump --section option - with pre-data, data, and post-data sections - and the --exclude-table-data option. Andrew Dunstan blogged about this briefly in pg_dump exclude table data. What is also nice is that pgAdmin III 1.16 supports the section option via the graphical interface. I was a bit disappointed not to find the exclude-table-data option in the pgAdmin III interface though.
The other nice thing about this feature is that you can use the PostgreSQL 9.2 pg_dump even against a 9.1 or lower database and achieve the same benefit.
The 9.2 pg_restore has similar functionality for restoring specific sections of a backup too.
So what is all this section stuff for? Well, it comes in particularly handy for upgrade scripts. I'll first explain what the sections mean and then give a concrete example of why you'd want this.
pre-data - the table structures, functions, etc., without constraints such as check constraints, primary keys, and indexes.
data -- it's, uhm, the data.
post-data - all the constraints, primary keys, indexes, etc.
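To make the split concrete, here is a sketch of dumping each section to its own plain-text script (the database name mydb is a placeholder, and a 9.2+ pg_dump is assumed):

```shell
# Dump each section of database "mydb" (placeholder name) separately.
pg_dump --section=pre-data  -f predata.sql  mydb   # tables, functions, no constraints
pg_dump --section=data      -f data.sql     mydb   # just the data
pg_dump --section=post-data -f postdata.sql mydb   # constraints, primary keys, indexes
```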
Use case: sections - convert your play database to a template
Imagine you've been developing your app and have lots of test data. Eventually you'll want to deploy your database as a template or a set of scripts. You certainly don't want to include your junk data, so you want your backup to include only the pre-data and post-data sections. Since it will be so light and fluffy, you might as well just output it as plain text, so you'd run a command something like:
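A sketch of such a command, assuming a database named mydb and an output file template.sql (both placeholder names) - --section can be given more than once:

```shell
# Dump only structure (pre-data) plus constraints and indexes (post-data)
# as plain SQL, skipping the data section entirely.
pg_dump --section=pre-data --section=post-data -f template.sql mydb
```

The resulting template.sql can then be run with psql against a freshly created database to stamp out copies of the structure.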
The exclude-table-data option is most useful, as Andrew mentioned, for not bloating your backup with huge static data while still being able to back up the structure. It also comes in handy for deploying things, as in the previous example.
Imagine you are developing an app, say an app for managing a store, and you want to deploy data as well, like states, taxes, etc., but not your junk orders; however, you want the table structure for your orders etc. to be part of your script. One way is to set aside a schema for store data and another for the other data you will package. The store schema would contain all the customer data. You would create a script something like this, which would have the effect of including all your other data but dumping out only the table structures in your store schema (and not the data).
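A sketch of that script, assuming the customer/order tables live in a schema called store and the database is named mydb (placeholder names) - the pattern store.* matches every table in that schema:

```shell
# Dump the whole database, but for every table in the "store" schema
# dump only the structure, not the rows.
pg_dump --exclude-table-data='store.*' -f deploy.sql mydb
```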
Yes, pgAdmin doesn't yet support the "exclude data" option. Not because it's difficult, but really because the UI handling of it is hard. We already have a treeview with checkboxes to allow users to select which tables they want to dump. Adding another checkbox to each tree item cannot be done yet with the widget. But I hope we'll be able to do that in the next release.
Sorry, I didn't give an example of that because it's a bit more involved. One example is the case where you need to do some cleanup because constraints have changed.
For example, PostGIS 1.5 had constraints built with functions that no longer exist in 2.0. Normally the constraint is part of the table definition in a regular load, so these tables wouldn't load unless you installed a legacy script.
If you could separate the check constraints out into a separate script -- essentially what the post-data section gives you -- then you could load the data and process a revised post-data script separately. There is also the case where you don't want the constraints installed at all.
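A sketch of that staged workflow, assuming a custom-format backup of a database mydb being restored into newdb (both placeholder names) - pg_restore's --section option works the same way as pg_dump's:

```shell
# Take a custom-format backup, then restore it in stages so the
# post-data step (constraints, indexes) can be revised or skipped.
pg_dump -Fc -f mydb.backup mydb
pg_restore --section=pre-data -d newdb mydb.backup
pg_restore --section=data     -d newdb mydb.backup
# Emit post-data as editable SQL instead of applying it directly,
# so constraints that reference dropped functions can be fixed first:
pg_restore --section=post-data -f postdata.sql mydb.backup
```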