
Fast Start Failover - SINGLE NODE

Today I will give you information about Fast Start Failover - SINGLE NODE.

1. We check the broker’s configuration to see if everything is normal.
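
As a minimal sketch (the connect identifier is a placeholder for the primary in this configuration), the check from DGMGRL could look like this:

DGMGRL> CONNECT sys@prmy
DGMGRL> SHOW CONFIGURATION;

A healthy configuration reports "Configuration Status: SUCCESS" with no warnings.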

2. We check whether Flashback is enabled in Primary and Standby Databases.
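
A sketch of this check with SQL*Plus, run on both the Primary and the Standby databases:

SQL> SELECT FLASHBACK_ON FROM V$DATABASE;

The column returns YES when Flashback Database is already enabled and NO otherwise.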

3. We check how long the Flashback Logs will be retained.
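
A sketch of the retention check; the value is in minutes and defaults to 1440 (24 hours):

SQL> SHOW PARAMETER DB_FLASHBACK_RETENTION_TARGET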

4. We check whether there is enough space in the location where Flashback Logs will be stored.
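
The Flashback Logs are written to the Fast Recovery Area, so the space check can be sketched as:

SQL> SELECT NAME, SPACE_LIMIT/1024/1024 AS SPACE_LIMIT_MB, SPACE_USED/1024/1024 AS SPACE_USED_MB FROM V$RECOVERY_FILE_DEST;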

5. Before activating Flashback, we stop Log Apply operations on Standby Databases.

If I were doing this with SQL, I would use the following commands.
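
As a sketch (stby is a placeholder for the standby database name), stopping Redo Apply from the Broker and its SQL*Plus equivalent:

DGMGRL> EDIT DATABASE stby SET STATE='APPLY-OFF';

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;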

6. We enable Flashback feature in Primary and Standby Databases.

The reason we enable this feature is that, after a Failover, the databases other than the Fast-Start Failover Target Standby (in particular the original Primary, which will be left in a disabled state) can be brought back to the roles they need to perform by using the Flashback Logs.

Otherwise, I would have to recreate these databases from scratch.

I could also do this with SQL from SQL*Plus, but since I will be managing the Data Guard environment from the Broker after the Broker configuration, I do it from the Broker to stay in the habit.
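
A sketch of the SQL*Plus form, run on the Primary and on each Standby (on the standbys, Redo Apply is already stopped as in step 5):

SQL> ALTER DATABASE FLASHBACK ON;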

7. We check whether the flashback feature is enabled in the databases.

8. It is checked whether Flashback Logs have started to be generated.

9. Current Flashback Size is also checked.

That corresponds to 100 MB, which is allocated by default when Flashback is first enabled. It then grows according to the DB_FLASHBACK_RETENTION_TARGET time.
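
A sketch of the queries behind steps 8 and 9, with the size converted to MB:

SQL> SELECT COUNT(*) FROM V$FLASHBACK_DATABASE_LOGFILE;

SQL> SELECT FLASHBACK_SIZE/1024/1024 AS FLASHBACK_SIZE_MB, RETENTION_TARGET, OLDEST_FLASHBACK_TIME FROM V$FLASHBACK_DATABASE_LOG;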

10. We start Log Apply operations on Standby Databases.
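
A sketch of restarting Redo Apply from the Broker (stby is a placeholder):

DGMGRL> EDIT DATABASE stby SET STATE='APPLY-ON';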

11. We check if the Log Apply processes have started.
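
A sketch of the check, from the Broker and from SQL*Plus on the standby:

DGMGRL> SHOW DATABASE stby;

SQL> SELECT PROCESS, STATUS, SEQUENCE# FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';

SHOW DATABASE should report the intended state as APPLY-ON with no errors, and the MRP0 process should be visible on the standby.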

12. A new table is created under the TEST user to verify that the Log Apply processes are working.
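
A sketch of such a smoke test (the table name and data are placeholders); the table is created on the Primary and, if the standby is open read-only, can then be queried there:

SQL> CREATE TABLE test.fsfo_check (id NUMBER, note VARCHAR2(50));
SQL> INSERT INTO test.fsfo_check VALUES (1, 'before failover');
SQL> COMMIT;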

13. Finally, we check the RedoRoutes properties to see whether redo will flow according to the desired architecture.
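
A sketch of the RedoRoutes check (database names are placeholders):

DGMGRL> SHOW DATABASE prmy 'RedoRoutes';
DGMGRL> SHOW DATABASE stby 'RedoRoutes';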

14. Now we start the parameter adjustments for Fast-Start Failover operations.

a. Target Standby Database is determined for Fast-start Failover.

b. The Fast-Start Failover Target Standby database to be used after the role change is determined.

c. It is determined how often the Observer will establish a connection with the Primary Database.

I don't think more parameter changes are necessary for now, so I don't make any other adjustments; a sketch of the corresponding commands is given below.
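
Which properties were actually changed here is an assumption; a typical sketch for items a-c would set FastStartFailoverTarget on both databases (so the pairing also holds after a role change) and FastStartFailoverThreshold, the number of seconds the Observer and the target wait after losing contact with the Primary before a failover is initiated (30 is the default):

DGMGRL> EDIT DATABASE prmy SET PROPERTY FastStartFailoverTarget='stby';
DGMGRL> EDIT DATABASE stby SET PROPERTY FastStartFailoverTarget='prmy';
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold=30;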

15. Before enabling Fast-start Failover, we check the status of the parameters one last time.

The Observer has not been STARTed yet, so the host it is running on cannot be seen. The Target parameter will also become visible only after Fast-Start Failover is ENABLEd.
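
A sketch of that last check:

DGMGRL> SHOW FAST_START FAILOVER;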

16. We ENABLE Fast-Start Failover through the Broker.
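
A sketch of the command:

DGMGRL> ENABLE FAST_START FAILOVER;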

The reason for this error is that although we use Far SYNC in the Data Guard environment, the current Protection Mode is Maximum Performance; it must be Maximum Availability.

17. We change the Protection Mode to Maximum Availability.
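
Maximum Availability requires SYNC redo transport towards the Far Sync / target standby; assuming that is already in place through the RedoRoutes settings, the mode change can be sketched as:

DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;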

18. Fast-Start Failover is requested to be ENABLEd again.

19. We query the status of Fast-Start Failover.

The reason the Lag Limit parameter says "not in use" is that this parameter is only used while in Maximum Performance mode.

20. We query the status of the broker configuration.

The reason for these warnings is that the Observer has not been started yet.

21. The Observer is started.

Since the Observer is a foreground process, it constantly occupies the DGMGRL prompt it was started from, and everything it detects and every action it takes can be followed from that prompt.

For this reason, it is useful to connect from a different computer. In addition, since it will be monitoring the Primary and Standby databases, it should not be run on those databases' hosts. That is why I run it on the Primary Far SYNC host.
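
A sketch of starting it from a DGMGRL session on that host (the connect identifier is a placeholder; the connection needs SYSDBA or SYSDG privileges):

DGMGRL> CONNECT sys@prmy
DGMGRL> START OBSERVER;

The session then stays occupied by the Observer, as described above.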

22. We query the status of Fast-Start Failover again.

I ran the Observer from a completely separate instance. This is not mandatory; as I mentioned before, it can also be run from the Far SYNC instances.

23. The broker configuration is also queried again.

The asterisk (*) indicates the Fast-Start Failover Target Standby database.

24. Fast-Start Failover is triggered by shutting down the Primary database with SHUTDOWN ABORT.
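
A sketch of triggering it, run in SQL*Plus on the Primary:

SQL> SHUTDOWN ABORT;

Once the FastStartFailoverThreshold period expires without contact, the Observer initiates the failover to the target standby automatically.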

25. The progress of the Failover is followed from the computer where the Observer was started.

26. The Failover operation completed successfully. We move on to the checks.

a. The broker configuration is queried to see if there is an error.

The reason for this warning is that the Original Primary Database is disabled and is not synchronized with the New Primary.

The reason for the error is that the original Primary requires a REINSTATE operation.

b. The statuses of the databases are queried.

c. Databases’ open_mode, roles and protection modes are queried.
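
A sketch of the check in item c, run on each database:

SQL> SELECT NAME, OPEN_MODE, DATABASE_ROLE, PROTECTION_MODE FROM V$DATABASE;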

d. It is checked whether the Log Switch operation works successfully. For this, RESETLOGS_TIME is queried first, because the database goes through an OPEN RESETLOGS during the Failover and the log sequence numbers are reset.

e. Existing log sequence numbers are queried.

f. Log Switch operation is performed and log sequence numbers are checked.

The reason for the difference of 2 in the sequence numbers is that one more log switch took place before I ran my commands. A sketch of the checks in items d-f is given below.
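
A sketch of the checks in items d-f, run on the new Primary:

SQL> SELECT RESETLOGS_TIME FROM V$DATABASE;
SQL> ARCHIVE LOG LIST
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> ARCHIVE LOG LIST

The current log sequence reported by ARCHIVE LOG LIST should increase after the switch, and the new sequence should also arrive at the standby.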

g. A table belonging to the TEST user is dropped to see if the DDL and DML operations are running smoothly; a sketch follows the sub-steps.

i. Existing tables are queried.

ii. The table is dropped.
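
A sketch of item g (the table name is the placeholder used earlier):

SQL> SELECT TABLE_NAME FROM DBA_TABLES WHERE OWNER = 'TEST';

SQL> DROP TABLE test.fsfo_check;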

27. Assuming that the problem in the original Primary database has been resolved, let’s re-enable it and ask it to assume the role of Physical Standby. For this, the database is mounted.
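
A sketch of this step, run on the original Primary; because Fast-Start Failover is enabled, the Observer reinstates the database automatically once it is mounted:

SQL> STARTUP MOUNT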

28. The logs in the Observer are followed.

29. The status of the original Primary database is queried.

30. The Recovery Modes of the databases are queried.
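
A sketch of this check, run on the new Primary; RECOVERY_MODE shows how each destination is being recovered (for example, MANAGED REAL TIME APPLY for the standby destination):

SQL> SELECT DEST_ID, DATABASE_MODE, RECOVERY_MODE FROM V$ARCHIVE_DEST_STATUS WHERE STATUS = 'VALID';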

31. The broker configuration is checked for errors.

32. A table belonging to the TEST user is dropped to see if the DDL and DML operations are running smoothly.

a. We are querying the existing tables.

b. The table is dropped.
