Sunday, January 27. 2013
Despite all the nasty things people say about MS Access, it does have fairly database-agnostic, easy-to-use import and export tools and a not-too-shabby query wizard and query tool. But of course it's not a server-side database, so at a certain point it won't scale for database storage. You can still use it as a front-end to a server-side database such as PostgreSQL or SQL Server. So once you outgrow it for data storage, you'll probably want a quick way to export your data out. Unfortunately, at least in the 2003 version, while you can bulk link tables from an ODBC connection, you can only export one table at a time using the "select table" right-click export. In this article we'll show a quick and dirty MS Access Visual Basic subroutine we use to export all tables. We've only tested this on MS Access 2003, so if you are using a later version, you might need to customize it a bit. This script should work fine for exporting to any database connection with modification of the connection string.
Continue reading "How to bulk export tables from MS Access"
Monday, November 12. 2012
Issue: Ever had the situation where you decided to delete a table, function, or type, but you weren't absolutely sure whether other functions in the database depended on it?
For things like tables and views that depend on a function, type, or other table/view, PostgreSQL won't allow you to delete the object they depend on without doing a DROP ... CASCADE.
For stored functions, though, while PostgreSQL will prevent you from deleting a type if a function returns it or takes it as input, it doesn't save you if the body of the function references the object. That dependency information is not tracked, and in fact it may be dynamic with dynamically generated SQL or search_path settings.
So how do you inspect functions for usage of other items?
Solution:
PostgreSQL has a catalog table called pg_catalog.pg_proc which stores the source code of (non-C) functions in the prosrc column.
So let's say you had a table called employees you want to get rid of or simply rename. To find all functions that reference the term "employees", you would do something like this:
SELECT proname, proargnames, prosrc
FROM pg_proc
WHERE prosrc ILIKE '%employees%';
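If you also want to know which schema each matching function lives in, you can join in pg_namespace. The query below is a minimal sketch along those lines; pg_namespace and pronamespace are standard system catalog names, and "employees" is just the example search term from above.

SELECT n.nspname AS function_schema, p.proname, p.proargnames
FROM pg_catalog.pg_proc AS p
    INNER JOIN pg_catalog.pg_namespace AS n ON n.oid = p.pronamespace
WHERE p.prosrc ILIKE '%employees%'
ORDER BY n.nspname, p.proname;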
Friday, June 08. 2012
I recently had the need to figure out which ranges of my keys were contiguously numbered. The related exercise is finding the gaps in the data. Reasons might be that you need to determine what data did not get copied or what records got deleted. There are lots of ways of accomplishing this, but this is the first that came to mind. This approach uses the lead window function and common table expressions, so it requires PostgreSQL 8.4+.
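As a teaser, here is a minimal sketch of the gap-finding half of the idea (the table name my_table and its integer key column id are hypothetical; the full article builds this out into contiguous ranges):

WITH ordered AS (
    SELECT id, lead(id) OVER (ORDER BY id) AS next_id
    FROM my_table
)
SELECT id AS gap_starts_after, next_id AS gap_ends_before
FROM ordered
WHERE next_id - id > 1;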
Continue reading "Finding contiguous primary keys"
Wednesday, January 25. 2012
PostgreSQL has aggregate functions called bool_and and bool_or, which it's had for as far back as I can remember.
What do they do? Given rows of premises (things that resolve to booleans), bool_and will return true if all of the premises are true. Similarly, bool_or will return true if any of the premises in the set of rows is true. What if, however, your boolean expressions are not in rows, but instead are passed in as a sequence of arbitrary statements of questionable fact? We want a function like bool_or or bool_and that takes an arbitrary number of boolean arguments. Are there functions that fit the bill? Indeed there are, but they don't scream out "I work with booleans" because they fit into a class of functions we discussed in The wonders of Any Element and also happen to be variadic functions. These are none other than greatest and least, old-timer functions that you can find in most versions of PostgreSQL. We'll demonstrate how to use all 4 with booleans in this article. It must be said that greatest and least are much more useful when applied to other data types like dates and numbers, but we were amused at the parallel with booleans.
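As a quick taste of all four (the values below are made up for illustration): since true sorts higher than false, greatest behaves like an OR over its arguments and least behaves like an AND.

-- aggregate forms: one result over a set of rows
SELECT bool_and(x > 0) AS all_positive, bool_or(x > 10) AS any_over_ten
FROM (VALUES (1),(5),(12)) AS t(x);

-- variadic forms: an arbitrary list of boolean expressions
SELECT greatest(false, 1 > 2, 5 = 5) AS any_true,
       least(true, 1 > 2, 5 = 5) AS all_true;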
Side note: we've started to write our book on PostgreSQL that will be published by O'Reilly. We hope to finish this book within the next 5-7 months, but preliminary e-Book drafts will be available before then for early purchasers to scan.
The focus of the book will be PostgreSQL 9.1, with some highlights of the upcoming PostgreSQL 9.2. Of course oldie-but-goodie topics are in the book too. It's a thrill to be writing again.
Continue reading "True or False every which way"
Monday, January 16. 2012
If I could name the number one feature I love most about PostgreSQL, it's the table inheritance feature, which we described in How to Inherit and Uninherit. A lot of people use it for table partitioning using CONSTRAINT EXCLUSION. Aside from that, in combination with the PostgreSQL schema search_path (customizable by user and/or database), it makes for a very flexible abstraction tool. For example, for many of our web apps that service many departments where each department/client wants to keep a high level of autonomy, we have a schema set aside for each that inherits from a master template schema. Each department site uses a different set of accounts, with the primary schema being that of the department/client, so that they are hitting their own tables.
Inheritance allows us to keep data separate, do roll-up reports if we need to, use the same application front-end, and yet gives us the ability to add new columns in just one place (the master template schema). It is more flexible than other approaches because, for example, we may have a city organization that needs to share tables, such as a system-loaded list of funding sources shared across the agency. We can set aside these shared tables in a separate schema visible to all, or let some departments have their own copy they can change if they don't want to use the shared one.
Every once in a while, we find ourselves needing to query the whole hierarchy and needing to know which table the results of the query are coming from. To help solve that issue, we employ the system column tableoid, which all user tables have. The tableoid is the object id of a table. PostgreSQL has many system columns that you have to explicitly select and that can't be accessed with a SELECT *, the tableoid being one of them. These are: tableoid, cmax, cmin, xmin, xmax, ctid, which are all described in System Columns. The PostgreSQL docs on inheritance have examples of using it, but we thought it worthwhile to repeat the exercise since it's not common knowledge and is a unique enough feature of PostgreSQL that others coming from other relational databases may miss the treat. I've often demonstrated it to non-PostgreSQL users who use, for example, SQL Server or MySQL, and they literally fall out of their chair when I show the feature to them and its endless possibilities.
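The basic pattern looks something like this (parent_table is a hypothetical table at the top of an inheritance hierarchy); casting tableoid to regclass turns the bare oid into a readable table name:

SELECT tableoid::regclass AS source_table, *
FROM parent_table;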
Continue reading "Table Inheritance and the tableoid"
Monday, November 21. 2011
In this day and age of clouds and whatnot, it is still important to keep tabs on your database growth to ensure it doesn't outgrow your disks, or so that you can be prepared when it does. A common question is how big your database is, or how big all your databases are combined. Luckily PostgreSQL has long had functions to help you answer this easily. In this little article, I'll demonstrate how to answer these questions and more with some PostgreSQL built-in functions.
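As a taste of the kind of built-ins the article covers (current_database(), pg_database_size, and pg_size_pretty are standard PostgreSQL functions; the cast to bigint just keeps the summed value compatible with pg_size_pretty on older versions):

-- size of the current database, nicely formatted
SELECT pg_size_pretty(pg_database_size(current_database()));

-- total size of all databases in the cluster
SELECT pg_size_pretty(sum(pg_database_size(datname))::bigint)
FROM pg_database;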
Continue reading "How big is my database and my other stuff"
Thursday, November 03. 2011
As mentioned in Lessons Learned Packaging PostGIS extensions, I am working on PostGIS 2.0.0 extension packaging. One of the things I wanted to know was which objects (types, functions, etc.) were installed by my extension. The new packaging system catalogs this relatively easily, but I couldn't find a function or view for it and didn't see one mentioned in the manual, so I created this query, which seems to work pretty well as far as I can tell. The basic idea is that any object recorded in pg_depend as having an extension dependency (deptype 'e') on the extension is part of the extension package.
SELECT c.relname As item_type,
    COALESCE(proc.proname, typ.typname, cd.relname, op.oprname,
        'CAST(' || cs.typname || ' AS ' || ct.typname || ') ',
        oc.opcname, ofa.opfname) As item_name,
    COALESCE(proc.proisagg, false) As is_agg,
    oidvectortypes(proc.proargtypes) As arg_types
FROM pg_depend As d
    INNER JOIN pg_extension As e ON d.refobjid = e.oid
    INNER JOIN pg_class As c ON c.oid = d.classid
    LEFT JOIN pg_proc AS proc ON proc.oid = d.objid
    LEFT JOIN pg_type AS typ ON typ.oid = d.objid
    LEFT JOIN pg_class As cd ON cd.oid = d.objid
    LEFT JOIN pg_operator As op ON op.oid = d.objid
    LEFT JOIN pg_cast AS ca ON ca.oid = d.objid
    LEFT JOIN pg_type AS cs ON ca.castsource = cs.oid
    LEFT JOIN pg_type AS ct ON ca.casttarget = ct.oid
    LEFT JOIN pg_opclass As oc ON oc.oid = d.objid
    LEFT JOIN pg_opfamily As ofa ON ofa.oid = d.objid
WHERE d.deptype = 'e' AND e.extname = 'postgis'
ORDER BY item_type, item_name;
The output looks like:
Continue reading "What objects are in a PostgreSQL installed extension"
Tuesday, November 01. 2011
Ever have the need to create a holding table for, say, spreadsheet data with 100 columns? Or perhaps you were feeling in a sadistic mood and wanted to abuse your PostgreSQL database to see how many columns of a specific data type you can create in a table.
Here is a quick script to do it:
SELECT 'CREATE TABLE data_import('
|| array_to_string(array_agg('field' || i::text || ' varchar(255)'), ',') || ');'
FROM generate_series(1,10) As i;
SELECT 'CREATE TABLE data_import('
|| string_agg('field' || i::text || ' varchar(255)', ',') || ');'
FROM generate_series(1,10) As i;
Both variants will return output that looks like this (the first uses array_agg, available since 8.4; the second uses string_agg, introduced in 9.0):
CREATE TABLE data_import(field1 varchar(255),field2 varchar(255),field3 varchar(255),field4 varchar(255)
,field5 varchar(255),field6 varchar(255),field7 varchar(255)
,field8 varchar(255),field9 varchar(255),field10 varchar(255));
Now if you also want it to execute because you are running it as part of an SQL script, you can wrap it in an anonymous DO block.
DO language 'plpgsql'
$$
DECLARE var_sql text := 'CREATE TABLE data_import('
|| string_agg('field' || i::text || ' varchar(255)', ',') || ');'
FROM generate_series(1,10) As i;
BEGIN
EXECUTE var_sql;
END;
$$ ;
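The FROM tacked onto the variable default above relies on how PL/pgSQL evaluates expressions (it wraps them in a SELECT). If you prefer a more conventional form, an equivalent sketch using SELECT ... INTO is:

DO language 'plpgsql'
$$
DECLARE var_sql text;
BEGIN
    SELECT 'CREATE TABLE data_import('
        || string_agg('field' || i::text || ' varchar(255)', ',') || ');'
        INTO var_sql
    FROM generate_series(1,10) As i;
    EXECUTE var_sql;
END;
$$;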
Thursday, October 27. 2011
One of the main features I love about PostgreSQL is its array support. This is a feature you won't find in most relational databases, and even databases that support some variant of it don't allow you to use it as easily. It is one of the features that makes building aggregate functions wicked easy in PostgreSQL, with no messy compiling required. Aside from building aggregate functions, arrays have some other everyday uses. In this article, I'll cover two common ways we use them, which I will refer to as the ANY and Contains tricks.
I like to think of this approach as the YeSQL programming style: how SQL can be augmented by more complex data types and index retrieval mechanisms. Arrays and many other data types (spatial types, key-value (hstore), ltree, etc.) are far from relational structures, yet we can query them easily with SQL and can even relate them.
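In a nutshell (the table name and ids below are made up for illustration): ANY compares a scalar against every element of an array, and the @> containment operator asks whether the left array contains all elements of the right one.

-- ANY: does emp_id match any element of the array?
SELECT * FROM employees WHERE emp_id = ANY(ARRAY[1, 5, 9]);

-- Contains: does the left array contain every element of the right array?
SELECT ARRAY[1,2,3] @> ARRAY[2,3] AS contains_them;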
Continue reading "PostgreSQL Array: The ANY and Contains trick"
Monday, September 26. 2011
UPDATE: Turns out there is a simpler way of getting rid of roles that have explicit permissions to objects, as Tom Lane pointed out in the comments.
DROP OWNED BY some_role;
will drop the permissions on objects a role has rights to even if it doesn't own those objects. Of course this needs to be applied with caution, since it will also drop tables and other things owned by the role that you don't necessarily want dropped. So it is best to first run:
REASSIGN OWNED BY some_role TO new_role;
and then run the DROP OWNED BY. The REASSIGN OWNED BY, which is what we did originally, is not sufficient since it doesn't drop the permissions or reassign them as we assumed it would. This is noted in the docs.
And then you will be allowed to
DROP ROLE some_role
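So the full sequence, using the example role names from above, is:

REASSIGN OWNED BY some_role TO new_role;  -- hand ownership of its objects to another role
DROP OWNED BY some_role;                  -- drop the remaining grants the role holds
DROP ROLE some_role;                      -- now the role itself can go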
One of the things that is still tricky in PostgreSQL is permission management. Even though 9.0 brought us default privileges and the like, these permissions aren't retroactive, so they are still a pain to deal with if you already have objects defined in your database.
One of the annoyances we come across is deleting roles. Let's say you have a role and it has explicit permissions to an object. PostgreSQL won't allow you to delete this role if it owns objects or has explicit permissions to objects. In order to delete it, it seems you have to go in and clear out all those permissions. To help with that, we wrote a quickie script that will generate a script to revoke all permissions on objects for a specific role. It looks like this:
Continue reading "Bulk Revoke of Permissions for Specific Group/User role"
Sunday, July 31. 2011
One of the biggest complaints about software, and perhaps even more specifically FLOSS software, is the lack of documentation. I'm not talking about those small little dialogs we throw in our code that rarely see the light of day. I'm talking about stuff you throw in user docs or specifications and so forth that an analyst or end-user reads.
The main reasons for this lack of documentation are that, while everyone seems to want documentation, few really want to pay for it, and it's time consuming to keep documentation in sync with what the software actually does. Having documentation that is very inconsistent with the software is worse than not having any documentation at all. A good way to keep documentation up to date is to fold it into the process of developing and testing the software, such that changes in software expose errors in the documentation and vice-versa, or such that the act of changing the software corrects the documentation.
Part of the way we try to do that on the PostGIS project is to require each function patch to include documentation. We also try to make our documentation executable by auto-generating PostgreSQL help and some test suites from the documentation. For example, the process of installing documentation-generated function and type descriptions in PostgreSQL catches inconsistencies between the implementation and what we actually publish about how the functions work. Even the image generator that builds most of the images in the manual is built with PostGIS at its core, so that when our build-bot is rebuilding the documentation it is exercising some PostGIS output functions.
I can't tell you how many times I've flagged changes in the current PostGIS 2.0 release simply by trying to install the docbook-generated PostgreSQL comment descriptions and having PostgreSQL complain that a function signature the documentation says should be there no longer exists. So then I go back and correct the documentation, or yell at the programmer if the documentation makes more sense than what they coded. On the other side, it's easy to catch what we neglected to document simply by scanning the functions in PostgreSQL and seeing which ones don't have descriptions.
Okay, as far as databases go, many a DB programmer/analyst has waltzed into a project only to be befuddled about the meanings of all these tables used by the application. If you are like us, you don't even want to look at any database documentation that is not part of the definition of the database, because you know 99% of the time it's so obsolete, or was just a pipe dream of someone working in a vacuum, that it's more useless than not having any documentation at all. It is nice to have nicely formatted documentation you can read separate from the code, but even sweeter if you can query the documentation just as easily as you can query the data.
A good way of having up-to-date documentation is to weave it in as part of the process of defining the structure. For databases this means using foreign keys, primary keys, and the commenting features that most relational databases offer these days. For example, the databases we commonly work with, PostgreSQL, SQL Server, MS Access, and even MySQL, all allow you to provide descriptions for tables, table columns, and sometimes other objects such as functions and stored procs right in the database. PostgreSQL even allows you to provide descriptions of columns in views, though that's a bit messier to do. Sadly there isn't a consistent way of pulling these descriptions out of the database that will work for all of them; each has differently defined meta tables it stores these descriptions in.
For this article, we'll demonstrate how to pull this information from PostgreSQL. PostgreSQL does offer many useful switches in psql for querying this data, but we'll focus our attention on pulling this data via SQL. It's much easier to incorporate this information in auto-generated documentation with SQL because you have more control over what you include and how to format it.
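As a minimal sketch of the idea for column comments (pg_class, pg_attribute, and col_description are standard system catalogs/functions; the table name employees is just a placeholder):

SELECT c.relname AS table_name, a.attname AS column_name,
    col_description(c.oid, a.attnum) AS column_comment
FROM pg_class AS c
    INNER JOIN pg_attribute AS a ON a.attrelid = c.oid
WHERE c.relname = 'employees'
    AND a.attnum > 0 AND NOT a.attisdropped
ORDER BY a.attnum;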
Continue reading "Querying table, view, column and function descriptions"
Sunday, May 22. 2011
Question: How do you deal with bad stat counts?
You have a large table and the default planner stats collector underestimates distinct counts of a critical query column, thus resulting in much-less-than-optimal query plans. How can you manually set this?
PostgreSQL 9.0 introduced the ability to set two settings on table columns: n_distinct and n_distinct_inherited, which are described a bit in the ALTER TABLE help. The n_distinct is the estimated number of distinct values for that column, with -1 or any other negative number representing a fraction of the estimated row count instead of a true count. n_distinct_inherited is a setting useful for parent tables that denotes the estimated distinct count summed across all of a parent's child tables. By tweaking these settings when they are less than optimal, you can influence the query planner to produce better plans. Why this is necessary is mostly for large tables, where the stats collector will not query the whole table to determine stats. The stats collector generally queries at most 10-30% of a table.
Determining if you need to set counts
It's always nice to have the stats collector do all these things for you, especially if you have a table that is constantly updated and whose distinct counts can fluctuate a lot. For static tables you may just want to set them manually.
So how do you know whether you should bother or not? Well, you can check the current values the stats collector has with this query:
SELECT tablename, schemaname, attname As colname, n_distinct,
array_to_string(most_common_vals, E'\n') AS common_vals,
array_to_string(most_common_freqs, E'\n') As dist_freq
FROM pg_stats
WHERE tablename = 'table_of_interest'
ORDER BY schemaname, tablename, attname;
You would then compare with your actual count:
SELECT count(DISTINCT column_of_interest) FROM table_of_interest;
which will give you the current count.
Setting n_distinct and n_distinct_inherited
You may want to bump the value up or down from the actual count when you set it. To set the column distinct count stats, you would do something like below, replacing 50 with the count you computed:
ALTER TABLE table_of_interest
ALTER COLUMN column_of_interest
SET (n_distinct=50);
ALTER TABLE table_of_interest
ALTER COLUMN column_of_interest
SET (n_distinct_inherited=50);
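Note that these settings take effect the next time the table statistics are regenerated, so follow them with an ANALYZE (the table name here is the example one from above):

ANALYZE table_of_interest;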
Tuesday, May 10. 2011
Question: What is the difference between CURRENT_TIMESTAMP and clock_timestamp()?
Answer: CURRENT_TIMESTAMP is an ANSI-SQL standard variable you will find in many relational databases, including PostgreSQL, SQL Server, Firebird, IBM DB2, and MySQL to name a few, that records the start of the transaction. The important thing to keep in mind about it is that there is only one value per transaction, so if you have a long-running transaction, you won't see it changing as you go along.
clock_timestamp() is a PostgreSQL function that always returns the current clock timestamp. I don't think I'm alone in using it for simple benchmarking and other things where, for example, I need to record the timings of each part of a function within the function using pedestrian RAISE NOTICE debug print statements.
There is another cool way I like using it, and that is for a batch of records, each with an expensive function call: benchmarking how long it takes to process each record. One of the things I'm working on is improving the speed of the tiger_geocoder packaged in PostGIS 2.0. The first route of attack seemed to be the normalize_address function, which I was noticing was taking anywhere from 10% to 50% of my time in the geocode process. That's a ton of time if you are trying to batch geocode a ton of records. The thing is, the function's speed is very sensitive to how badly formed the address is, so a whole batch could be held up by one bad apple, and since the batch doesn't return until all records are processed, it makes the whole thing seem to take a while.
So rather than looping through each record, I thought it would be cool if I could run the batch, but for each record have it tell me how long it took to process relative to the rest, so I could get a sense of what a problem address looks like. So I wrote this query:
WITH ctbenchmark
AS
(SELECT *,
the_time - COALESCE(lag(the_time) OVER(ORDER BY the_time), CURRENT_TIMESTAMP) As process_time,
the_time - CURRENT_TIMESTAMP As diff_from_start
FROM (SELECT address_1, city, state, zip,
pprint_addy(normalize_address(coalesce(address_1,'') || ', ' || coalesce(city || ' ','') || state || ' ' || zip)) As pp_addr,
clock_timestamp() As the_time
FROM testgeocode LIMIT 1000) As foo )
SELECT *
FROM ctbenchmark
WHERE process_time > '00:00:00.016'::interval;
which returned output something like this:
address_1        | city | state | zip    | pp_addr    | the_time                   | process_time | diff_from_start
-----------------+------+-------+--------+------------+----------------------------+--------------+-----------------
48 MAIN ST ..    | S..  | MA    | 021..  | 48 MAIN .. | 2011-05-10 03:24:43.078-04 | 00:00:00.032 | 00:00:00.032
15 ...           |      | MA    | 018... | 15 GREN... | 2011-05-10 03:24:50.796-04 | 00:00:00.031 | 00:00:07.75
Saturday, February 19. 2011
Question: You have a system of products and categories and you want a product to be allowed to be in multiple categories, but you want a product to be allowed to be in only one main category. How do you enforce this rule in the database?
Some people will say -- why can't you just deal with this in your application logic? Our general reason is that much of our updating doesn't happen at the application level. We like enforcing rules at the database level because it saves us from ourselves. We are in the business of massaging data. For this particular kind of example, we wanted to make sure the database would provide us a nice safety net so that we wouldn't accidentally assign a product to two main categories.
Answer
There are two approaches we thought of. One is the obvious: have a primary category column and a bridge table that holds the secondary categories. That is an ugly solution, because when you do a query you have to do a union and always treat the secondary categories as different from the main one. For most use-cases we don't usually care about distinguishing primary from secondary categories.
The solution we finally settled on was to have one bridge table with a boolean field for whether it's the main category. We enforce the only-one-main-category requirement using a partial index. Now, not all databases support partial indexes. This is one major value of using PostgreSQL: you have so many more options for implementing logic.
As some people noted in the comments and the reddit entry, SQL Server 2008 has a similar feature called Filtered Indexes, though PostgreSQL has had partial indexes for as far back as I can remember. This brings up an interesting point which I have observed -- if you were using PostgreSQL before, you would already know how to use the Filtered Indexes and multi-row inserts introduced in SQL Server 2008, and the SEQUENCE feature coming in SQL Server 2012. So we should all use PostgreSQL, because it teaches us how to use the newer versions of SQL Server before they come out. :)
So how does the partial index solution look? NOTE: for simplicity, we are leaving out all the complementary tables and the foreign key constraints that we also have in place.
CREATE TABLE products_categories
(
category_id integer NOT NULL,
product_id integer NOT NULL,
main boolean NOT NULL DEFAULT false,
orderby integer NOT NULL DEFAULT 0,
CONSTRAINT products_categories_pkey PRIMARY KEY (category_id, product_id)
);
CREATE UNIQUE INDEX idx_products_categories_primary
ON products_categories
USING btree
(product_id)
WHERE main = true;
Testing it out. It saves us and gives us a nice informative message to boot.
INSERT INTO products_categories(category_id, product_id, main)
VALUES (1,2,true), (3,2,false), (3,3,true), (4,2,true);
ERROR: duplicate key value violates unique constraint "idx_products_categories_primary"
DETAIL: Key (product_id)=(2) already exists.
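As a usage note on this design: pulling each product's single main category is then a simple filter, and the partial index above covers exactly these rows (a hypothetical query against the table defined above):

SELECT product_id, category_id
FROM products_categories
WHERE main = true;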
Tuesday, January 18. 2011
In our last article we talked about string aggregation and implementing it in PostgreSQL, SQL Server, and MySQL. This task is one that makes purist relational database programmers a bit squeamish. In this article we'll talk about the reverse of that: how do you deal with data that someone hands you delimited in a single field, and that you are asked to explode or re-sort based on some lookup table?
What are the benefits of having a structure such as this?
 p_name | activities
--------+--------------------------------
Dopey | Tumbling
Humpty | Cracking;Seating;Tumbling
Jack | Fishing;Hiking;Skiing
Jill | Bear Hunting;Hiking
Well, for the casual programmer or a simple text-file database that knows nothing about JOINs and so forth, it makes it simple to pull a list of people who like Tumbling: you simply do a WHERE ';' || activities || ';' LIKE '%;Tumbling;%'. It's great for security too, because you can determine access with a simple LIKE check and also list all the security groups a member belongs to without doing any joins. It's quite easy for even the least data-skilled of programmers to work with, because most procedural languages have a split function that can easily parse these into an array useful for stuffing into drop-down lists and so forth. As a consultant to semi-techie people, I'm often torn by the dilemma of "What is the way I would program for myself vs. the way that provides the most autonomy to the client?" By that I mean, for example, I try to avoid heavyweight things like wizards that add bloated dependencies or slow down an application. These bloated dependencies may provide ease to the client but make my debugging life harder. So I weigh the options and figure out which way works now and also provides me an easy escape route should things like speed or complexity become more of an issue.
This brings us to the topic of what is wrong with this model. It can be slow, because the LIKE condition can't easily take advantage of an index unless you use a full text index, so it's not ideal where this is the primary filtering factor. It's also prone to pollution, because you can't easily validate that the values in the field are in your valid set of lookups, nor can the text be forced to change with a CASCADE UPDATE/DELETE rule when your lookup changes. In cases where this is of minor consequence, which is many if referential integrity is not high on your list of requirements, this design is not bad. It might make a purist throw up, but oh well, there is always dramamine to fall back on. As long as you have done your cost-benefit analysis, I don't think there should be any shame in following this less-than-respected route.
While you may despise this model, it has its place and it's a fact of life that one day someone will hand it to you and you may need to flip it around a bit. We shall demonstrate how to do that in this article.
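For the PostgreSQL flavor, the core of the flip is regexp_split_to_table, which explodes a delimited string into rows. A minimal sketch against the sample data above (the table name people_activities is hypothetical; the full article covers PostgreSQL, SQL Server, and MySQL):

SELECT p_name, regexp_split_to_table(activities, ';') AS activity
FROM people_activities
ORDER BY p_name, activity;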
Continue reading "Reverse String Aggregation: Explode concatenated data into separate rows In PostgreSQL, SQL Server and MySQL"