June 15th, 2010 by depesz

Every now and then someone on IRC, on mailing lists, or in private contact asks about rules.

My answer virtually always is: don't use rules. If you think that they solve your problem, think again. Why?

Rules have been controversial for a very long time. I remember Jan Wieck talking at a PostgreSQL conference in Poland years ago (I think it was 2000, but I'm not 100% sure) and saying something about rewriting or deprecating the rule subsystem (sorry, it was a long time ago and my memory is not super sharp, so I don't recall his exact words).

Some people want to remove rules entirely. Some just prefer not to use them. Some like them and do use them. An overview of the positions can be found in a thread on the pgsql-hackers mailing list.

So. What is the main point against rules? That they are very complicated, and not immediately understandable even to the most seasoned DBAs.

What escapes most people is that rules are not a conditional engine for running other queries, but a way to modify queries and/or add more queries to the flow.

Let's see an example. This (well, something similar, not letter-for-letter identical) was actually posted once on #postgresql on IRC. Someone wanted to use a rule to implement insert-or-update. Test code:

CREATE TABLE test (
    id       INT4 PRIMARY KEY,
    some_val INT4 NOT NULL
);
 
CREATE RULE whatever AS ON INSERT TO test
    WHERE exists ( SELECT * FROM test WHERE id = NEW.id )
        DO INSTEAD UPDATE test SET some_val = some_val + 1 WHERE id = NEW.id;

So, after creating the rule, with the test table empty, I issued:

INSERT INTO test (id, some_val) VALUES (1, 1);

What's the content of the test table? If you know – congratulations. Most people will not grasp at first why it's:

# SELECT * FROM test;
 id | some_val
----+----------
  1 |        2
(1 row)

After a second insert with id = 1, some_val gets incremented to 3, as expected. But why is the initial value of some_val 2?

That's because the "WHERE" clause from the rule is added to the original query, and then the second query is run. Like this:

original query:

INSERT INTO test (id, some_val) VALUES (1, 1);

becomes:

INSERT INTO test (id, some_val)
    SELECT 1, 1 WHERE NOT (
        EXISTS ( SELECT * FROM test WHERE id = 1)
    );
UPDATE test
SET some_val = some_val + 1
WHERE id = 1 AND ( EXISTS ( SELECT * FROM test WHERE id = 1 ) );
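For comparison, the same insert-or-update logic written as a trigger operates on the actual row values instead of rewriting the query, so it does not suffer from this kind of double evaluation. A minimal sketch, assuming the test table from above (the function and trigger names are mine):

```sql
CREATE OR REPLACE FUNCTION test_upsert() RETURNS TRIGGER AS
$BODY$
BEGIN
    -- Try to update an existing row with this id first.
    UPDATE test SET some_val = some_val + 1 WHERE id = NEW.id;
    IF FOUND THEN
        RETURN NULL;  -- a row existed and was updated: suppress the INSERT
    END IF;
    RETURN NEW;       -- no existing row: let the INSERT proceed normally
END;
$BODY$
LANGUAGE plpgsql;

CREATE TRIGGER test_upsert BEFORE INSERT ON test
    FOR EACH ROW EXECUTE PROCEDURE test_upsert();
```

Note that this sketch, like the rule, is still racy under concurrent inserts; the point here is only that the trigger sees the values exactly once, as they will be stored.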

Another problem. Let's assume we want to use rules to provide an audit trail of changes in a table.

CREATE TABLE test (
    id           SERIAL PRIMARY KEY,
    created      TIMESTAMPTZ NOT NULL,
    some_value   FLOAT
);
CREATE TABLE test_log (
    lid          SERIAL PRIMARY KEY,
    lrecorded    TIMESTAMPTZ,
    loperation   TEXT,
    t_id         INT4,
    t_created    TIMESTAMPTZ,
    t_some_value FLOAT
);
CREATE RULE whatever AS
    ON INSERT TO test
    DO ALSO
        INSERT INTO test_log ( lrecorded, loperation, t_id, t_created, t_some_value )
        VALUES ( now(), 'INSERT', NEW.id, NEW.created, NEW.some_value );

Looks simple, and cannot go wrong. Right? Let's see:

# INSERT INTO test ( created, some_value ) VALUES ( '1410-07-15 12:34:56', 123 );
INSERT 0 1
 
# SELECT * FROM test;
 id |          created          | some_value
----+---------------------------+------------
  1 | 1410-07-15 12:34:56+01:24 |        123
(1 row)
 
# SELECT * FROM test_log;
 lid |           lrecorded           | loperation | t_id |         t_created         | t_some_value
-----+-------------------------------+------------+------+---------------------------+--------------
   1 | 2010-06-15 11:45:56.221673+02 | INSERT     |    2 | 1410-07-15 12:34:56+01:24 |          123
(1 row)

So, everything is right, and the rule is OK? No.

Let's imagine that someone would like to enter random some_value:

# INSERT INTO test ( created, some_value ) VALUES ( '1410-07-15 12:34:56', random() * 500000 );
INSERT 0 1
 
# SELECT * FROM test;
 id |          created          |    some_value
----+---------------------------+------------------
  1 | 1410-07-15 12:34:56+01:24 | 273576.067527756
(1 row)
 
# SELECT * FROM test_log;
 lid |           lrecorded           | loperation | t_id |         t_created         |   t_some_value
-----+-------------------------------+------------+------+---------------------------+------------------
   1 | 2010-06-15 11:47:46.501995+02 | INSERT     |    2 | 1410-07-15 12:34:56+01:24 | 408030.828461051
(1 row)

Oops?! some_value and t_some_value are not the same.

Well, someone might say: sure, it's random(), and nobody sane uses random(). Leaving aside how funny that claim is, let's change the example. Let's assume we want an exact created value in the test table. now(), as you may know, returns the transaction start time, and is constant within a transaction.

So, if a transaction took 1 hour, an insert into the test table using now() would record a not-really-correct "created" timestamp. But we can use clock_timestamp(), which always returns the current time:

# INSERT INTO test ( created, some_value ) VALUES ( clock_timestamp(), '123' );
INSERT 0 1
 
# SELECT * FROM test;
 id |            created            | some_value
----+-------------------------------+------------
  1 | 2010-06-15 11:51:25.833451+02 |        123
(1 row)
 
# SELECT * FROM test_log;
 lid |           lrecorded           | loperation | t_id |           t_created           | t_some_value
-----+-------------------------------+------------+------+-------------------------------+--------------
   1 | 2010-06-15 11:51:25.741417+02 | INSERT     |    2 | 2010-06-15 11:51:25.833638+02 |          123
(1 row)

Well, a difference of 0.0002s is negligible. Right? Sure. But what happens if this small difference straddles midnight, thus falsifying the day of the operation? Or the month? Or the year?

Sure, it's still possible to avoid this kind of problem by not using volatile functions as a source of data. Right. So let's also say goodbye to sequences, and a lot of other interesting functionality in PostgreSQL.
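Why sequences? Because nextval() is volatile too. In the audit example above, test.id comes from a sequence, and the DO ALSO rule expands NEW.id in both queries to the defaulting expression, not to the value that was stored. Roughly like this (the sequence name assumed here is the default one a SERIAL column creates):

```sql
-- With the audit rule in place, NEW.id in the rule body is not the value
-- that was inserted -- it is the expression that produced it, i.e.
-- nextval('test_id_seq'). So the rewritten flow is approximately:
INSERT INTO test (id, created, some_value)
    VALUES (nextval('test_id_seq'), '1410-07-15 12:34:56', 123);
INSERT INTO test_log (lrecorded, loperation, t_id, t_created, t_some_value)
    VALUES (now(), 'INSERT', nextval('test_id_seq'), '1410-07-15 12:34:56', 123);
-- nextval() runs twice: test.id gets 1, while test_log.t_id gets 2.
```

This is exactly why t_id shows 2 in the test_log outputs above, even though test.id is 1.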

The next example came from e-mail, and it nicely illustrates what happens when people use techniques they don't really understand (if you are this person – please don't feel offended – I just needed an example for this post).

This person has a table with some columns, and used a trigger that does essentially this:

CREATE OR REPLACE FUNCTION whatever() RETURNS TRIGGER AS
$BODY$
BEGIN
    IF NEW.some_field = 'T' THEN
        NEW.some_other_field := NEW.another_field;
    ELSIF NEW.some_field = 'F' THEN
        NEW.some_other_field := - NEW.another_field;
    ELSE
        NEW.some_other_field := 0;
    END IF;
    RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql;
CREATE TRIGGER xxx BEFORE INSERT ON some_table FOR EACH ROW EXECUTE PROCEDURE whatever();

Simple and to the point.

But this person also wanted to use a rule. So he created this:

CREATE OR REPLACE RULE whatever AS
    ON INSERT TO some_table
        DO ALSO
            UPDATE some_table
            SET some_other_field = (
                CASE
                    WHEN NEW.some_other_field = 'T' THEN (NEW.another_field)
                    WHEN NEW.some_other_field = 'F' THEN (-NEW.another_field)
                    ELSE 0
                END
            );

And he didn't understand why it was slower.

Well, for starters – the rule approach runs 2 queries, while the trigger-based one runs only one. Second – because it does an insert and then an update – it will effectively bloat the table (every inserted row immediately produces a dead tuple).

And this doesn't even consider what happens if the value of "another_field" is passed as a function call, where the function can be volatile, or can simply take a non-trivial amount of time.

One last note. I was pointed to a test made by my friend, which shows that for simple partitioning, rules are faster.

I took a look at it, and the numbers didn't really add up. I'm talking about "Partitioning with no dynamic routing".

I copied Filip's scripts to my desktop and ran them. Results:

  • rule based approach: 445.540 ms
  • trigger based approach: 5966.436 ms

This is even worse than what Filip reported. But then I took a look at the trigger and noticed (sorry, Filip) that it's absolutely horrific. With this code, it's no wonder it behaves so badly.

So, I changed the trigger function to:

CREATE OR REPLACE FUNCTION trg_mainlog_partitioner() RETURNS TRIGGER AS $$
DECLARE
    use_sql TEXT;
BEGIN
    IF NEW.sel < 1 OR NEW.sel > 1000 THEN
        RAISE EXCEPTION 'trg_mainlog_partitioner error: mainlog.sel value NOT supported: %', NEW.sel;
    END IF;
    use_sql := 'INSERT INTO mainlog_p' || NEW.sel || ' (id, sel, tm, info) VALUES ($1, $2, $3, $4)';
    execute use_sql USING NEW.id, NEW.sel, NEW.tm, NEW.info;
    RETURN NULL;
END;
$$ language plpgsql;

The change is simple: instead of 1000 IF/ELSIF branches, I do a single IF as a sanity check, and then build a dynamic query.

Result – the same insert took only 649.677 ms.

While it's still slower than the rule-based approach, I believe this is mostly irrelevant: the difference is small, and, more importantly, this test inserted 10000 rows with a single INSERT command. That is the best-case situation for a rule, but in typical systems it's not that common. I would much sooner expect 10000 INSERT statements than a single 10k-row INSERT, except perhaps in data migration processes.

Just out of curiosity, I decided to test it.

I loaded Filip's rules into one database and my trigger into another, and then ran this:

$ perl -e '
    for my $i (1..1000) {
        for (1..10) {
            printf "INSERT INTO mainlog(sel,info) VALUES (%u, %u);\n", $i, $i;
        }
    }' | time psql -d partitioning_triggers
..... (10000 lines of INSERT 0 0)
0.11user 0.07system 1:02.76elapsed 0%CPU (0avgtext+0avgdata 6368maxresident)k
0inputs+0outputs (0major+491minor)pagefaults 0swaps

And then I ran the same, but to database with rules:

$ perl -e '
    for my $i (1..1000) {
        for (1..10) {
            printf "INSERT INTO mainlog(sel,info) VALUES (%u, %u);\n", $i, $i;
        }
    }' | time psql -d partitioning_rules
..... (10000 lines of INSERT 0 0)
0.10user 0.07system 5:36.86elapsed 0%CPU (0avgtext+0avgdata 6368maxresident)k
0inputs+0outputs (0major+491minor)pagefaults 0swaps

Whoa. Now the proportions are reversed, and rules are 5 times slower than the trigger. Why is that?

With our 1000-partition system, each insert is rewritten by the rules into over 1000 inserts. Nearly all of them (all except one) will not insert anything, but they still have to be evaluated!

With the trigger-based approach, we simply run some procedural, precompiled code to check the partitioning criteria, and then run one more query via EXECUTE. It's just faster.

So, to wrap it all up: I believe that one shouldn't use rules. They are complex and lead to errors that are hard to track down. They can cause table bloat, and they are slower than triggers for non-trivially-small partitioning setups.
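As a concrete illustration, the audit rule from earlier can be rewritten as a trigger, which sees NEW only after defaults and volatile functions have been evaluated, so it logs exactly what was stored. A sketch, assuming the test/test_log tables from above (the function and trigger names are mine):

```sql
CREATE OR REPLACE FUNCTION test_audit() RETURNS TRIGGER AS
$BODY$
BEGIN
    -- NEW here holds the row as actually stored: id from the sequence,
    -- created/some_value after any volatile function has been evaluated.
    INSERT INTO test_log ( lrecorded, loperation, t_id, t_created, t_some_value )
        VALUES ( now(), 'INSERT', NEW.id, NEW.created, NEW.some_value );
    RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql;

CREATE TRIGGER test_audit AFTER INSERT ON test
    FOR EACH ROW EXECUTE PROCEDURE test_audit();
```

With this in place, random(), clock_timestamp(), and sequence-generated ids are all logged with the values that actually landed in the table.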

There are some cases where rules have no alternative (as David Fetter pointed out) – namely: writable views. There has been some work put into this, but as of now it is unavoidable: if you want writable views, you have to use rules. Just ask yourself – are you really so smart that you can write rules that are free of problems, restrictions on input data, and table bloat? And will all your successors who take care of this particular database be as smart as you?


  2. Jun 15, 2010

    my “rules vs triggers” post was updated in answer to your critique :-)

  3. Jun 15, 2010

    I actually like the DO INSTEAD stuff a lot. It allows me to transparently do magic stuff like automatic historisation of changed data etc. So an UPDATE turns into a COPY THE OLD DATA, INSERT THE CHANGED DATA. Allowed us to do amazing stuff for the official ship register of Switzerland.

  4. Jun 15, 2010

    @Marc:
    I understand that rules let people do cool stuff. But in my personal opinion – drawbacks are more substantial than gains.
    Of course – this applies to “general situation”. In specific cases you might be put in situation when you simply don’t have choice, or, in this particular case, benefits are more important.

  5. Jun 15, 2010

    From Troels database comparison:

    Postgres –

    Has views. Breaks that standard by not allowing updates to views; offers the non-standard ‘rules’-system as a work-around.

    So, the reason DBA’s don’t like ‘em is they’re way non-standard, and many DBA’s have to deal with multiple database engines; The Big Three behave mostly OK together at the SQL level (not so much at the config/tuning level, though). Has been so from the beginning (was Stonebraker’s idea, IIRC). Getting to SQL-92 would be good; SQL-99 would be better. And even The Big Three differ on what’s an updateable view, and SQL-2008 has some rules for them, but I’ve not seen them.

    Your first example is, essentially, the MERGE verb (OK, not -99 but -2003, that’s still 7 years ago).

    If one wishes to make BCNF databases, while isolating client code from the schema, views are essential. While better than MySql, by a mile, PostgreSQL does need to become at least -99 compliant if it wants to be a threat to The Big Three.

    Among Big Three developers, triggers are often disliked, if they run at row level. My docs tell me that PostgreSQL has STATEMENT syntax, but not statement implementation. Having that would be a Good Thing.

  6. Jun 15, 2010

    @Robert:
    First – I’m not sure what you mean by ‘has statement syntax, but not statement implementation’.

    I’m also not sure what you mean by ‘the big three’ – oracle/mssql/ ?

    Personally I don’t really care about “standard”. The reason is very simple – while it’s nice to have a standard – even the simplest things are done differently – limit/offset for example.

    My point is – I, as dba, don’t like rules because they are tricky, and often can lead to problems which are hard to detect. Standard compliance is “nice to have”, but definitely not on my list of priorities.

  7. # Stephen Denne
    Jun 15, 2010

    Does the whatever rule on some_table update every row in some_table?

  8. Jun 15, 2010

    @Stephen:
    yes.
    as far as I know, it was intentional – i.e. the table contains only one row.

  9. Jun 16, 2010

    @Depesz:

    Trigger (CREATE) syntax can be by ROW or by STATEMENT. PostgreSQL, according to the docs I’ve seen, accepts STATEMENT, but always executes by ROW; this is a performance issue for triggers which affect multiple rows.

    DB2 is the third Big Three.

    Rules exist in PostgreSQL because, mostly, the SQL standard syntax isn’t implemented. The Big Three get closer to implementing the standard. Going One’s Own Way is otherwise referred to as Vendor Lock In. While not perfect, the SQL standards, given that SQL is the product not of Codd but Chamberlin (he now does XQuery), have gotten closer to what Codd meant. The 2008 standard being closer to what Codd proved: all views are updateable.

    If you don’t like Rules, but want their semantics, then implementing the standard is the answer. Is it not the case that only view implementation requires rules, all else can be done with triggers?

  10. # Teoś
    Jun 16, 2010

    1. A trigger is a function, and calling a function costs time, unless it is plain SQL which gets inlined (is it?)
    Cf. Pavel Steule’s blog: http://okbob.blogspot.com/2009/12/how-dont-write-plpgsql-functions.html
    But a trigger *must* be PL/pgSQL, which makes it a performance drawback. A logical way for the RULE to act would be on the syntactic level (i.e. before execution).
    2. Of course the multiple evaluation of inserted values, shown in the examples above, is against any logic and seems to be a bug rather than a feature.
    3. There is no big sense in an ON INSERT DO ALSO UPDATE type of rule. What would make sense is ON INSERT DO INSTEAD INSERT (and subsequent instructions) – which was my intention. Unfortunately this is prohibited, as it leads to an infinite recursion: the insert calls the rule, which does an insert, which calls the rule, etc. I don’t buy this explanation. It seems simple to verify whether the rule was already applied to a given query or not. That would not be a breach of the concept of a RULE, whereas in its current form it is simply not logical. Instead of rewriting the rule we are forced to use a mechanism of double writes, which results in doubled resource consumption.

  11. # Teoś
    Jun 16, 2010

    Of course Steule => Stehule in above.

  12. # Marko Tiikkaja
    Jun 16, 2010

    @Robert Young:

    “Trigger (CREATE) syntax can be by ROW or by STATEMENT. PostgreSQL, according to the docs I’ve seen, accepts STATEMENT, but always executes by ROW; this is a performance issue for triggers which affect multiple rows.”

    Maybe it’s time for you to take another look at the docs. Statement-level triggers were added in 7.4; that’s almost 7 years ago.

    “The 2008 standard being closer to what Codd proved: all views are updateable.”

    I haven’t read the spec, but I very much doubt that. Think aggregates, for example.

  13. # Teoś
    Jun 16, 2010

    Of course triggers have larger capabilities than rules but they are heavier at the same time.
    Somebody even suggested rewriting time-critical parts to plain C. http://database.itags.org/postgresql/55687/
    I don’t know if this is really the best solution to all trigger/rule problems.

  14. Jun 16, 2010

    According to “SQL in a Nutshell”, pg. 212, PostgreSQL doesn’t enforce statement-level triggers. If it’s known that statement level is in fact implemented, then let them know. It seems unlikely to me that they would make such a straightforward assertion without testing.

  15. Jun 16, 2010

    OK, my bad. The book, although recent, refers to 7.0.

  16. # gapsf
    Jun 16, 2010

    What about SELECT rules?
    For example, for access control, I want to check and change SELECTs on the fly.
    And I need a mechanism to rewrite and describe rewriting rules (for SELECTs too).
    The mechanism should allow to setup rewrite rules and rewrite query (syntax tree) before the optimizer.
    For example, to filter out certain records from a query on a particular algorithm.
    It should be faster than using a stored procedure to “decide” what records filter out.

    For example:
    Instead of
    “SELECT * FROM some_table WHERE call_to_some_procedure();” (which is slow)
    it will be possible to generate modifier
    and every DML will be modified before the execution by automatically adding modifier into WHERE clause on the fly.

    What do you think about this technique?
    Is it possible in PostgreSQL?

  17. # Jona
    Jun 16, 2010

    GAPSF, I believe what you’re looking for is “row level security” or what Oracle calls Virtual Private Databases.
    Josh Berkus has previously stated that it should be possible to implement this type of feature using rules but I haven’t seen an example of this.
    See also: http://it.toolbox.com/blogs/database-soup/thinking-about-row-level-security-part-1-30732

    Personally I’d like to see an example of how to use rules to rewrite:
    SELECT * FROM Log.Transaction_Tbl WHERE clientid = 1
    to become:
    SELECT T.* FROM Log.Transaction_Tbl T
    INNER JOIN User.Access_Tbl A ON T.clientid = A.clientid
    WHERE T.clientid = 1 AND A.userid = [A GLOBAL VARIABLE OR OTHER IDENTIFIER PREFERABLY SET ONCE / AT CONNECTION TIME, "CURRENT_ROLE" COULD BE AN OPTION]

    There’s a PostGreSQL project named Veil which attempts to solve this, but it seems overly complicated and maintenance heavy for simple use cases like my example.

  18. Jun 16, 2010

    @gapsf:
    you can easily do it with views. So there is no point in bringing rules just to do it.

    anyway – select rules and views are virtually the same thing, so there is not much point in discussing which one is better.

  19. Jun 16, 2010

    @Robert Young:

    thanks for clarification on big-3.

    As for statement triggers, I think Marko explained it well, and the fact that the book is based on PostgreSQL 7.0 means that it’s virtually useless now – at least when it comes to talking about PostgreSQL. 7.0 was released in 2000, and in those 10 years PostgreSQL has really moved ahead. A lot.

    As for what rules are required for – currently (afaik) views are internally done using the same mechanism as rules, but this is virtually irrelevant, as select rules are very different beasts from modification rules.

    The only case where modification rules are still technically required is writable views. And yet – I’m happily *not* using writable views, just to avoid rules and their restrictions.

  20. # Marko Tiikkaja
    Jun 16, 2010

    @Jona:

    You can do that relatively easily with a view and a custom GUC. It will require a C-language module to implement the GUC, but the code should be straightforward.

  21. Jun 16, 2010

    @Teoś:

    ad “trigger is a function and calling”.

    sure. but showing a degenerate example of how *not* to write a function, as a general excuse to never write functions, is a pretty bad argument. I don’t like Stalin, and he was Russian, so all Russians must be bad people? There is no logic in it.

    ad “There is no big sense in ON INSERT DO ALSO UPDATE type of rule.”
    there would be a lot of sense in such rules if the rule system worked on values, and not on the parameters as passed to the query – but you’re right, the rule in my post did not make sense. It was just the example I got.
    besides, I think I explained – a trigger in this case would be *much* more efficient, if only for not bloating the table.

    ad “triggers … than rules … but they are heavier”
    as I showed in the partitioning example – this is *far* from true. what’s more – a trigger is guaranteed to provide correct values, which (again: as I showed) rules simply cannot do in some cases.
    of course I can forbid usage of random/clock_timestamp in my application, but what about *every* volatile function?

    Anyway – you seem to be sticking to the idea of using rules. Fine. There are also others who use them – you can view the thread I linked. I just don’t think that usage of rules is OK for most people. And I still don’t see any valid excuse (except for writable views, which, as I said, I don’t use) to use rules instead of something that gives you correct results.

  22. Jun 16, 2010

    @Marko:
    You don’t need C to add custom GUC. What would you need C for in this case?

  23. # Teoś
    Jun 16, 2010

    @Depesz

    I’m not glued to rules in general. I’m not trying to prove that they are better and that they should be used instead of triggers. I only see a good place where rules could be used if they only worked properly.

    I’m also not against writing functions. I’m only saying that calling functions has its cost, and they should be avoided if not required for some reason. I remember a statement “do not write functions < 100 lines” (but I don’t remember which PostgreSQL guru posted it). This does not come from fear of the unknown but from practice: I recently compared query results between 100k inline conversions (of the type y=ax+b) and conversions using PL/pgSQL functions, and the inline version was an order of magnitude faster.

    Concerning the “weight” of triggers and rules: I think you are using a pars pro toto argument. If in this example rules were slower than triggers, it does not mean that they are slower in general. In ON INSERT … UPDATE – yes, two times slower (though if an INSTEAD INSERT – INSERT type were allowed, this would look different), but why in the general case?
    Rewriting the query – if properly done – should be faster than calling the trigger function once per row to do the same job. Maybe the idea is right but the implementation is wrong?

    Of course the problem of volatile functions in rules is clearly an error and an important fact, so thank you for pointing it out. And if it is implemented according to the definition, that probably means the definition is wrong.

    Anyway, thanks for the job.

  24. # janus
    Jun 23, 2010

    @gapsf: been there, done that, works fine – even with inserts, updates and deletes.

    Yet I’m really looking forward to a native RLS implementation.

  25. # Teoś
    Jul 2, 2010

    Dear Depesz, I have seen exactly this side-effect error implemented today, and the guys discussing why the sequence gets updated twice. The problem is real. Are you 100% sure that it is by design and should not be reported as a bug?

  26. Jul 2, 2010

    Yes. Rules rewrite queries, which causes these kinds of problems. But since they have been here for a long time, a change in the way they work is very, very unlikely.

  27. In the example with the transaction log, in addition to the problematic usage of the random function, the t_id field was actually being logged incorrectly for the same reason; I point this out only because it means the “before we try to break it with a call to random” example is already broken (which confused me while reading the transition between the first and second examples) ;P.
