One of the difficulties in getting to understand and configure londiste
resides in the relation between the
ticker and the replication. This question
was raised once more on IRC yesterday, so I made a new FAQ entry about it:
How does this ticker thing relate to londiste?
What we did implement in the previous article is a cache system, all with
its necessary cache invalidation policy. Sometimes though, the
processing of an event needs to happen within the same transaction where
the event is registered in your system. PostgreSQL makes it possible to
maintain a summary table transactionally thanks to its trigger
support. Today, we’re going to dive into how to maintain a summary table with
triggers, and its impact on concurrency.
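As a teaser, here is a minimal sketch of the technique, assuming a hypothetical likes event table and a tweet_activity summary table (all names are made up for illustration; the trigger syntax requires PostgreSQL 11 or later):

```sql
-- Hypothetical schema: tweets, like events, and a per-tweet summary.
create table tweet (id bigserial primary key, body text);
create table likes (tweet_id bigint references tweet(id),
                    at       timestamptz default now());
create table tweet_activity (tweet_id bigint primary key references tweet(id),
                             likes    bigint not null default 0);

-- The trigger function maintains the summary row in the very same
-- transaction that registers the event.
create function register_like() returns trigger
language plpgsql as $$
begin
  insert into tweet_activity (tweet_id, likes)
       values (new.tweet_id, 1)
  on conflict (tweet_id)
    do update set likes = tweet_activity.likes + 1;
  return new;
end;
$$;

create trigger maintain_summary
  after insert on likes
  for each row execute function register_like();
```

Because the summary is updated in the event's own transaction, readers never see a stale count; the price, as we are about to discuss, is that concurrent likes of the same tweet now contend on the same summary row.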
This article is extracted from my book Mastering PostgreSQL in Application
Development, which teaches SQL to
developers so that they may replace thousands of lines of code with very
simple queries. The book has a full chapter about Data Manipulation and
Concurrency Control in PostgreSQL, including caching with materialized
views, check it out!
Let’s continue our dive into PostgreSQL Concurrency. In the previous article of
the series, Modeling for
Concurrency, we saw how to model
your application for highly concurrent activity. It was a follow-up to the
article entitled PostgreSQL Concurrency: Isolation and
was a primer on PostgreSQL isolation and locking properties and behaviors.
In this article, we’re going to think about when we should compute results
and when we should cache them for instant retrieval, all within the SQL
tooling. The SQL tooling to implement such a cache is the MATERIALIZED VIEW,
and it comes with cache invalidation routines, of course.
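In SQL terms, the cache and its invalidation look like this; relation names here are hypothetical, and REFRESH ... CONCURRENTLY requires the unique index shown:

```sql
-- Cache the result of a (possibly expensive) aggregate query.
create materialized view tweet_stats as
   select tweet_id, count(*) as likes
     from likes
 group by tweet_id;

-- A unique index is required for the CONCURRENTLY variant below.
create unique index on tweet_stats (tweet_id);

-- Cache invalidation: recompute the view. CONCURRENTLY lets readers
-- keep using the previous version of the cache while it is rebuilt.
refresh materialized view concurrently tweet_stats;
```

The whole cache lifecycle stays inside the SQL tooling: no extra cache server, and the invalidation policy is a single statement you schedule or trigger as your freshness requirements dictate.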
Today’s article takes us a step further and builds on what we did last week,
in particular the database modeling for a tweet-like application. After
having had all the characters from Shakespeare’s A Midsummer Night’s Dream
tweet their own lines in our database in PostgreSQL Concurrency: Data
Modification Language, it’s time for them
to do some actions on the tweets: likes and retweets.
Of course, we’re going to put concurrency to the test, so we’re going to
have to handle very very popular tweets from the play!
PostgreSQL is a relational database management
system; it’s even the world’s most advanced open source relational database.
As such, at its core, Postgres solves concurrent access to a set of data and
maintains consistency while allowing concurrent operations.
Postgres exposes its concurrency APIs in the SQL language, in particular in the DML parts of it: you can read the Data Manipulation Language chapter of the PostgreSQL docs for all the details.
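One small sketch of what "concurrency in the DML" means in practice, using a hypothetical tweet-processing queue (FOR UPDATE SKIP LOCKED requires PostgreSQL 9.5 or later):

```sql
-- Claim one unprocessed row so that two concurrent sessions
-- never pick the same tweet: concurrency control, right in the DML.
begin;

select id, body
  from tweet
 where processed is false
 order by id
 limit 1
   for update skip locked;   -- other sessions skip rows we hold locked

-- ... process the tweet, update its processed flag, then:
commit;
```

The locking clause is part of the SELECT statement itself; there is no separate locking API to learn beyond the SQL you already write.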
Today it’s time to conclude our series of PostgreSQL Data
Types articles with a recap. The series covers lots of
core PostgreSQL data types and shows how to benefit from the PostgreSQL
concept of a data type: more than input validation, a PostgreSQL data type
also implements expected behaviors and processing functions.
This allows an application developer to rely on PostgreSQL for more complex
queries, having the processing happen where the data is, for instance when
implementing advanced JOIN operations, then retrieving only the data set
that is interesting for the application.
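A classic instance of "processing where the data is" is the top-N-per-group query, here sketched with a LATERAL join over hypothetical author and tweet tables:

```sql
-- For each author, fetch only their 3 latest tweets: the per-group
-- sorting and limiting happen in the database, and the application
-- receives exactly the data set it is interested in.
select author.name, recent.body, recent.created_at
  from author
 cross join lateral (
        select body, created_at
          from tweet
         where tweet.author_id = author.id
      order by created_at desc
         limit 3
       ) as recent;
```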
In order to put the Point datatype in a context where it makes sense, we’re
going to download a complete geolocation data set and normalize it, thus
making good use of both the normalization good practice and those other
PostgreSQL data types we’ve been learning about in the previous articles of
the series. Buckle up, this is a long article with a lot of SQL inside.
PostgreSQL has built-in support for JSON with a great range of processing
functions and operators, and complete indexing support. The documentation
covers all the details in the chapters entitled JSON
Types and JSON Functions and Operators.
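A quick sketch of that support, using a hypothetical event table: the ->> operator extracts fields, and the containment operator @> is served by a GIN index.

```sql
-- jsonb stores a parsed, binary representation that operators can index.
create table event (id bigserial primary key, payload jsonb);

insert into event (payload)
     values ('{"type": "like", "tweet_id": 42, "tags": ["drama"]}');

-- The default GIN operator class supports the @> containment operator.
create index on event using gin (payload);

select payload ->> 'type'                  as type,
       (payload ->> 'tweet_id')::bigint    as tweet_id
  from event
 where payload @> '{"type": "like"}';
```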
The SQL standard includes a SQL/XML extension,
which introduces the predefined data type XML together with constructors,
several routines, functions, and XML-to-SQL data type mappings to support
manipulation and storage of XML in a SQL database, as per the Wikipedia
page about SQL/XML.
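PostgreSQL implements a good part of that extension; here is a small sketch of the xml data type and a SQL/XML publishing function, with illustrative values:

```sql
-- SQL/XML publishing: build an XML fragment from SQL values.
select xmlelement(name tweet,
                  xmlattributes(42 as id),
                  'What angel wakes me from my flowery bed?');

-- Input validation is part of the data type: the cast below
-- fails to parse, so the invalid document never gets stored.
-- select xml '<unclosed>';
```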