Oracle Database Architecture

INTRODUCTION:-

An Oracle Database consists of at least one database instance and one database. The database instance handles memory and processes. The database consists of physical files called data files and can be a non-container database or a multitenant container database. An Oracle Database also uses several database system files during its operation.

INSTANCE:- A database instance is a set of memory structures that manage database files. An instance has two parts:- the System Global Area (SGA) and the background processes.

Diagram of Database Architecture

SYSTEM GLOBAL AREA (SGA):-

The SGA is a group of shared memory structures dedicated to the Oracle instance; it contains the data and control information of the instance.

PROGRAM GLOBAL AREA:-  

PGA stands for Program Global Area. It is a private memory region that contains the data and control information for a single server process. A separate PGA is created for every server process.
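
A minimal sketch for inspecting these memory areas from SQL*Plus, assuming a session with privileges on the V$ performance views:

-- Total SGA size for the instance (SQL*Plus command)
SHOW SGA
-- Aggregate PGA memory currently allocated and in use across server processes
SELECT name, ROUND(value/1024/1024, 1) AS mb
  FROM v$pgastat
 WHERE name LIKE 'total PGA%';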

ORACLE DATABASE MEMORY STRUCTURES:-

The SGA is the memory area that contains data and control information for the instance. The SGA includes the following data structures (a query sketch for inspecting their sizes follows the list):-

  • SHARED POOL:- Caches various constructs that can be shared among users.
  • DATABASE BUFFER CACHE:- Caches blocks of data retrieved from the database.
  • REDO LOG BUFFER:- Caches redo information (used for instance recovery) until it can be written to the physical redo log files stored on disk.
  • JAVA POOL:- Used for all session-specific Java code and data within the Java Virtual Machine (JVM).
  • STREAMS POOL:- Used by Oracle Streams to store information required by capture and apply processes.
  • LARGE POOL:- Optional area that provides large memory allocations for certain large processes, such as Oracle backup and recovery operations and I/O server processes.
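
A short sketch for seeing how these components are sized on a given instance (again assuming access to the V$ views):

-- Size of each SGA component, largest first
SELECT name, ROUND(bytes/1024/1024, 1) AS mb, resizeable
  FROM v$sgainfo
 ORDER BY bytes DESC;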

BACKGROUND PROCESSES:-

  • DATABASE WRITER PROCESS (DBWn):- Writes dirty buffers from the database buffer cache to disk (a short sketch follows the conditions below).

A dirty buffer is a modified buffer in the database buffer cache that no longer matches the corresponding block on disk.

CONDITIONS (when DBWn writes):

  • When a server process cannot find a reusable buffer after scanning a threshold number of buffers, DBWn writes dirty buffers to disk asynchronously.
  • DBWn also writes buffers periodically to advance the checkpoint.
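
A minimal sketch, assuming SYSDBA privileges, that forces a full checkpoint (so DBWn writes all dirty buffers) and then looks at the DBWn write statistics kept by the instance:

-- Force DBWn to write all dirty buffers to disk
ALTER SYSTEM CHECKPOINT;
-- DBWn write activity recorded by the instance
SELECT name, value FROM v$sysstat WHERE name LIKE 'DBWR%';
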
  • LOG WRITER PROCESS (LGWR):- Writes redo entries from the redo log buffer to the online redo log files (see the sketch after the conditions below).

CONDITIONS (when LGWR writes):

  • When a user process commits a transaction.
  • When the redo log buffer is one-third full.
  • Before DBWn writes dirty buffers to disk.
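
A brief sketch, assuming SYSDBA privileges, for looking at the online redo log groups LGWR writes to and for forcing a switch to the next group:

-- Status of each online redo log group
SELECT group#, sequence#, ROUND(bytes/1024/1024) AS mb, archived, status FROM v$log;
-- Force LGWR to switch to the next redo log group
ALTER SYSTEM SWITCH LOGFILE;
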
  • PROCESS MONITOR (PMON):- Monitors the other background processes and, when a user process disconnects abnormally, cleans up the resources it was holding.
  • Performs process recovery when a user process fails.
  • Cleans up the database buffer cache.
  • Monitors sessions for idle session timeout.
  • Dynamically registers database services with the listener (see the sketch after this list).
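
As a hedged illustration of the last point (it assumes SYSDBA privileges), service registration with the listener can be triggered manually, and the background processes running in the instance can be listed:

-- Ask the instance to re-register its services with the listener
ALTER SYSTEM REGISTER;
-- Background processes currently running in this instance
SELECT name, description FROM v$bgprocess WHERE paddr <> HEXTORAW('00');
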
  • SYSTEM MONITOR (SMON):- Performs recovery at instance startup and cleans up unused temporary segments. It checks regularly to see whether it is needed.
  • Performs recovery at instance startup.
  • Cleans up unused temporary segments.
  • CHECKPOINT PROCESS (CKPT):- Maintains the checkpoint information in the control file and in the data file headers. When a checkpoint occurs, Oracle Database updates the headers of all data files to record the details of the checkpoint. The CKPT process itself does not write data blocks to disk (see the sketch below).
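
A minimal sketch, assuming SYSDBA privileges, for comparing the checkpoint SCN recorded in the control file with the checkpoint SCN recorded in each data file header:

-- Current SCN and last checkpoint SCN from the control file
SELECT current_scn, checkpoint_change# FROM v$database;
-- Checkpoint SCN recorded in each data file header
SELECT file#, checkpoint_change# FROM v$datafile_header;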

CHECKPOINTS ARE MAINLY OF TWO TYPES 🡪

  • FULL CHECKPOINT 🡪 In a full (normal) checkpoint, all dirty data is flushed to disk, and the control file, data file headers, and redo log are updated with the current SCN.
  • INCREMENTAL CHECKPOINT 🡪 In an incremental checkpoint, only the oldest dirty data is flushed to disk; the control file and redo log are updated with the checkpoint SCN, but the data file headers remain unchanged.
  • SCN 🡪 SCN stands for System Change Number. It is a unique, ever-increasing number that the database uses to order changes; it is incremented as transactions commit and also advances periodically (roughly every three seconds) even when the database is idle.
  • ARCn PROCESS 🡪 ARCn is an Oracle background process responsible for copying completely filled online redo log files to the archived log destination. Once a file has been copied, it can be overwritten.
  • PRACTICAL PROCESS FOR ARCn:- Enable ARCHIVELOG mode from SQL*Plus as SYSDBA:

ARCHIVE LOG LIST
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST
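
Once archiving is enabled, a hedged way to confirm that ARCn is producing archived log files (using the standard V$ARCHIVED_LOG view):

-- Force a log switch so ARCn archives the current log, then list the archived logs
ALTER SYSTEM SWITCH LOGFILE;
SELECT sequence#, name, completion_time FROM v$archived_log ORDER BY sequence#;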

QUERY PROCESSING WORKFLOW

Diagram: Query Processing Workflow

The outline of the above process is (a sketch illustrating the cursor hit and miss of steps 3 and 4 appears after the list):-

  1. A client-side cursor is opened.
  2. The client’s server process searches for a shareable cursor on the server side, within the SGA’s Shared Pool memory structure.
  3. If the search returns a reusable cursor, it’s marked as a Hit and the next step would be to execute the cursor.
  4. If the cursor is not found, it’s marked as a Miss and now, a new cursor area is allocated and the parsing for this new cursor would commence.
  5. Once the parsing of the cursor is over, the bind variables (if used) are replaced with the actual values supplied for them.
  6. The query may be chosen to use parallelism, i.e., more than one process may be used to execute it.
  7. Once the execution of the query is complete, which produces the required data for the given statement, the results are fetched by the server process.
  8. Once the data is fetched, the cursor is closed.
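
As a hedged illustration of the hit/miss behaviour in steps 3 and 4 (the EMPLOYEES table and DEPARTMENT_ID column are assumptions, e.g. from the HR sample schema), running the same statement twice with a bind variable and then checking V$SQL shows one shared cursor being loaded once and executed twice:

-- Run the same statement twice with different bind values
VARIABLE dept NUMBER
EXEC :dept := 10
SELECT COUNT(*) FROM employees WHERE department_id = :dept;
EXEC :dept := 20
SELECT COUNT(*) FROM employees WHERE department_id = :dept;
-- LOADS counts hard parses of the cursor; EXECUTIONS shows how often it was reused
SELECT sql_text, parse_calls, loads, executions
  FROM v$sql
 WHERE sql_text LIKE 'SELECT COUNT(*) FROM employees%';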

PROCESSING OF A SQL STATEMENT

When a new SQL statement is submitted, Oracle Database allocates memory for it from the shared pool, in the library cache. The server process first performs a syntax check; if there is no syntax error, it moves on to the next step, the semantic check, which is done against the data dictionary cache.

The semantic check is a two-part process. First comes the object check: if the referenced object (for example, the table name) is present, the check succeeds and processing moves on to the privilege check, which verifies the user's permissions; if the permissions are granted, this check also succeeds. This entire work is called PARSING. If no matching cursor exists in the library cache and the statement must go through all of these steps, it is a hard parse; if an existing cursor is found in the library cache and reused, it is a soft parse. After parsing, a hash code is generated for the statement and the optimizer evaluates the candidate execution plans [n number of execution plans] and chooses one; the resulting cursor is then executed from the library cache.
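
A small, hedged sketch for inspecting the execution plan the optimizer produced for a statement (again assuming an EMPLOYEES table such as the one in the HR sample schema):

-- Parse the statement and store its plan, then display the chosen plan
EXPLAIN PLAN FOR
  SELECT * FROM employees WHERE employee_id = 100;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);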
