I am writing this article as a guide for beginners; it is an introduction to PostgreSQL.
Before getting started, I want to share something that PostgreSQL experts consider very important: never say "Postgre". If you say "Postgre", people in the PostgreSQL forums may not answer your questions.
We will add an "s" to the end and use "Postgres" in the rest of this article.
Postgres is a relational database management system. I recommend reading the article "About SQL Server and Comparison with Other Relational Database Management Systems", in which I compare PostgreSQL with other relational database management systems.
First, I will describe the basic architecture of Postgres. We will examine Postgres in two groups of processes: the query backend and the utility backend.
Query Backend
These are the main Postgres processes in which queries are processed.
Authentication and authorization are performed first.
The following checks are made during authentication and authorization (an example configuration follows the list).
- User Name
- IP
- DB
- Password
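These checks are driven by the pg_hba.conf file, which maps connection type, database, user, and client address to an authentication method. As a rough sketch (the database, user, and network below are made up), you can inspect the active rules from SQL on PostgreSQL 10 and later:

```sql
-- Show the parsed pg_hba.conf rules (view available since PostgreSQL 10).
SELECT line_number, type, database, user_name, address, auth_method
FROM pg_hba_file_rules;

-- An example pg_hba.conf line behind such a rule (hypothetical values):
-- host    mydb    myuser    192.168.1.0/24    md5
```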
If validation succeeds, a postgres subprocess is forked from the postgres master process to serve the connection.
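To see these per-connection backend processes on a running cluster, a small sketch:

```sql
-- Each successful connection is served by its own backend process;
-- pg_stat_activity lists them, one row per backend.
SELECT pid, usename, datname, client_addr, state
FROM pg_stat_activity;
```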
In Postgres, a query goes through three main stages: Parse + Plan + Execute.
After the postgres process is created, the incoming query is parsed (its syntax and semantics are checked).
The planner then finds the most appropriate plan for the fastest execution of the query, and the executor runs it.
The table statistics must be accurate so that the planner can do its job well.
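As a small illustration (the orders table and the filter are hypothetical), ANALYZE refreshes the planner statistics and EXPLAIN ANALYZE shows the plan the planner chose together with the actual execution:

```sql
-- Refresh planner statistics for a hypothetical table.
ANALYZE orders;

-- Show the chosen plan and the real execution times.
EXPLAIN ANALYZE
SELECT *
FROM orders
WHERE customer_id = 42;
```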
Now, let's examine the story of a SELECT statement.
Imagine that the planner has found the most appropriate plan for the SELECT query and the query is executing.
The postgres process first checks whether the requested data is in the shared buffers. The other caches in RAM are the OS cache, the kernel cache, and the FS cache (on a Linux operating system).
PostgreSQL manages the shared buffer but does not manage the OS cache. Operations beyond the shared buffer are handed over to the kernel. That is, if the requested data is not in the shared buffer, postgres asks the OS cache. If the data is in the OS cache, it is copied into the shared buffer; if it is not, it is read from disk.
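A rough way to observe this split: the shared buffer hit ratio can be read from pg_stat_database, while reads served by the OS cache and reads from disk are both counted as blks_read, because Postgres only knows that it asked the kernel.

```sql
-- Shared buffer hit ratio for the current database. blks_read counts
-- blocks requested from the kernel, whether they came from the OS cache
-- or from disk.
SELECT datname,
       blks_hit,
       blks_read,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 4) AS hit_ratio
FROM pg_stat_database
WHERE datname = current_database();
```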
PostgreSQL pages are 8 KB both on disk and in the shared buffer; operating system memory pages are typically 4 KB.
The recommended shared buffer size is between 25% and 50% of total memory. You should not set the shared buffer too large, because even when the data is not in the shared buffer, it is looked up in the OS cache before going to disk. In some benchmark tests, setting the shared buffer too high has been observed to reduce performance. In production environments, 8-12 GB is generally sufficient.
For databases with heavy transaction loads, I recommend starting at 8 GB and then checking the logs to see whether it is enough.
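A minimal sketch of checking and changing the setting (the 8 GB value is just the starting point suggested above; shared_buffers only takes effect after a restart):

```sql
-- Current value of the shared buffer.
SHOW shared_buffers;

-- Write a new value to postgresql.auto.conf; a service restart is required.
ALTER SYSTEM SET shared_buffers = '8GB';
```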
Clean and Dirty Buffer in PostgreSQL
There are two types of buffers in Shared Buffer.
- clean buffer
- dirty buffer
If the data has not been altered since it was fetched into memory, it is called a clean buffer.
If it has changed after being fetched into memory, it is called a dirty buffer.
When the next checkpoint operation is triggered, the dirty buffers are written to the data disks.
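If the pg_buffercache contrib extension is installed, you can count clean and dirty buffers yourself; a sketch:

```sql
-- pg_buffercache exposes one row per shared buffer, including a dirty flag.
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

SELECT count(*) FILTER (WHERE isdirty)     AS dirty_buffers,
       count(*) FILTER (WHERE NOT isdirty) AS clean_buffers
FROM pg_buffercache;
```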
BGWRITER Process in PostgreSQL
Apart from the checkpointer, there is another process called bgwriter that writes dirty buffers to the disks. bgwriter writes dirty blocks to disk repeatedly at a specified interval. Instead of writing all dirty pages to the data disk as a checkpoint does, it uses an LRU (Least Recently Used) algorithm when choosing which dirty buffers to write.
A checkpoint can be triggered by elapsed time (checkpoint_timeout) or by the amount of WAL generated (max_wal_size). By default, it is triggered every 5 minutes.
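You can see how checkpoints are being triggered and how much work bgwriter is doing from pg_stat_bgwriter (the column set below is valid up to PostgreSQL 16; newer versions move the checkpoint counters to pg_stat_checkpointer):

```sql
-- Checkpoint-related settings.
SHOW checkpoint_timeout;
SHOW max_wal_size;

-- checkpoints_timed: triggered by checkpoint_timeout
-- checkpoints_req:   triggered by WAL volume or a manual CHECKPOINT
SELECT checkpoints_timed,
       checkpoints_req,
       buffers_checkpoint,  -- dirty buffers written by checkpoints
       buffers_clean        -- dirty buffers written by bgwriter
FROM pg_stat_bgwriter;
```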
WAL Files
In Postgres there are files called WAL (Write-Ahead Log) files. WAL files store insert, update, and delete operations. They are similar to "redo logs" in Oracle or "transaction logs" in SQL Server.
WAL files are of great importance in ensuring data consistency. Before version 10.0, WAL files were stored by default in the pg_xlog directory under the data directory; from 10.0 onward, they are stored by default in the pg_wal directory under the data directory.
The default size of WAL files is 16 MB.
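You can see where the WAL is currently being written, and which 16 MB segment file that position falls into (these function names are valid from version 10 onward):

```sql
-- Current WAL write position and the corresponding WAL file name.
SELECT pg_current_wal_lsn(),
       pg_walfile_name(pg_current_wal_lsn());
```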
You can specify how much information is written to the WAL files by setting the wal_level parameter in postgresql.conf (under the data directory) to one of the following values; a configuration sketch follows the list.
- Minimal
- Replica
- Logical
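A configuration sketch for checking and changing the level (the value below is only an example; changing wal_level requires a restart):

```sql
-- Current WAL level.
SHOW wal_level;

-- Equivalent to writing "wal_level = replica" in postgresql.conf,
-- followed by a restart of the service.
ALTER SYSTEM SET wal_level = 'replica';
```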
Minimal:
With this setting, the WAL files store only the information needed to recover the Postgres cluster after an unexpected shutdown or crash.
Because some bulk operations are not logged, those bulk operations are faster.
However, this wal level does not provide enough information to reconstruct the entire data set from a base backup and the WAL files.
If you are going to use replication, you should use the replica value. To set up replication, you can read the following article:
“How To Install PostgreSQL Replication On Centos”
Replica:
At this wal level, in addition to what minimal stores, the WAL contains the information needed for WAL archiving and the WAL that a standby server needs to read when you use replication.
Logical:
At this wal level, logical change sets are stored in the WAL in addition to everything at the replica level. The WAL size may therefore be larger. In particular, if you use REPLICA IDENTITY FULL on many tables and those tables receive heavy UPDATE and DELETE traffic, the WAL will grow very large.
If REPLICA IDENTITY FULL is set on a table, the full contents of the updated and deleted rows of that table are retained in the WAL.
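A small sketch with a hypothetical table, showing how to enable it and how to check the current replica identity in the catalog:

```sql
-- Make the WAL keep the full old row for UPDATE and DELETE on this table.
ALTER TABLE orders REPLICA IDENTITY FULL;

-- relreplident: 'd' = default (primary key), 'f' = full,
--               'n' = nothing, 'i' = index
SELECT relname, relreplident
FROM pg_class
WHERE relname = 'orders';
```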
Story of a Query
After all this information, let's look briefly at the story of an UPDATE query.
First, the postgres process checks whether the data is in the shared buffer. If it is not, it asks the kernel whether the data is in the OS cache. If the data is not in the OS cache either, it is read from disk into the OS cache and then into the shared buffer.
Because the data has not changed since it first arrived in the shared buffer, it is a clean buffer. When the data is updated in memory, it becomes a dirty buffer, and the change is written to the WAL files.
Suppose the commit has not yet occurred. If the next checkpoint is triggered after the update finishes but before the COMMIT completes, the updated data will be written to the data disk. (We will return to this moment later; remember it as point X.)
When the COMMIT is executed, a commit record is written to the WAL files.
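The same story, condensed into a hypothetical transaction with the steps as comments:

```sql
BEGIN;

UPDATE orders
SET    status = 'shipped'
WHERE  order_id = 1001;
-- at this point the affected pages are dirty buffers in shared memory,
-- and the change has been written to the WAL stream

COMMIT;
-- the commit record is written to the WAL files
```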
What happens if the service shuts down unexpectedly at point X?
At point X, the data had been written to the data files but had not been committed.
Postgres checks the WAL files when it starts. Because the update was not committed at point X, a rollback is performed, and the changes written to the data disk by the checkpoint are rolled back.
With this example, I hope you understand the interrelationships between Checkpoint, Commit, WAL Files and Database Recovery, and the working architecture of Postgres.
Utility Backend
Even when there are no connections, there are processes running in the background. Below you can see the utility backend processes:
- Logger
- Statistics collector
- Checkpointer
- Autovacuum
In Postgres:
- A table is called a relation
- A row is called a tuple
- A column is called an attribute
We will become familiar with these concepts over time.
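The system catalogs use the same vocabulary, which you can see with a hypothetical table:

```sql
-- pg_class describes relations; reltuples is the estimated number of
-- tuples (rows) and relnatts the number of attributes (columns).
SELECT relname, reltuples, relnatts
FROM pg_class
WHERE relname = 'orders';

-- pg_attribute lists the attributes (columns) of a relation.
SELECT attname
FROM pg_attribute
WHERE attrelid = 'orders'::regclass
  AND attnum > 0
  AND NOT attisdropped;
```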
If you want to go a little further with Postgres, you can also read the following articles:
“What is MVCC (Multi Version Concurrency Control) On PostgreSQL”,
“What is Vacuum On PostgreSQL”