Oracle Database 19c on Cisco X210c M6 – WOW!
Please welcome back – guest blogger, Tushar Patel, Principal Engineer, Cisco
Recently I published a blog reviewing the technology advances Cisco Systems built into the Cisco UCS X-Series Modular System, making it a true hybrid-cloud platform complete with cloud-based management by Cisco Intersight and a range of compute nodes that are “like” rack-optimized servers that slide vertically into the X9508 chassis. I say “like” in that these nodes contain up to six NVMe drives that can house applications and data (two hardware RAID1 drives separately house the operating system), providing the option of hosting a single-instance Oracle database. I closed that blog by noting that baseline testing with FIO (an industry-standard I/O benchmarking tool) showed the system could sustain up to 37 million IOPS (Input/Output Operations Per Second) using 4K data blocks.
Oracle database administrators (DBAs) may be thinking: that was a good start, but what about true Oracle workloads? I agree. Let’s find out.
We used the Silly Little Oracle Benchmark (SLOB) and Oracle SwingBench to test Cisco X210c M6 performance on a single compute node. These tests use 8K data blocks, as is typical with Oracle databases. The extensive details are in this whitepaper, or read the highlights below.
SLOB Test Results
The Silly Little Oracle Benchmark (SLOB) is a toolkit for testing I/O through an Oracle database. SLOB is very effective at exercising the I/O subsystem with genuine Oracle SGA-buffered physical I/O. It supports testing physical random single-block reads (db file sequential read) and random single-block writes (DBWR flushing capability). For the read workload, SLOB issues single-block reads, which are 8K here since the database block size was 8K.
The user scalability test was performed with 64, 128, 192, 256, 384, and 512 users on a single-instance Oracle Database node, varying the read/write ratio as follows:
100% read (0% update)
90% read (10% update)
70% read (30% update)
50% read (50% update)
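The read/write mixes above map directly to SLOB’s UPDATE_PCT parameter in its slob.conf file. A minimal sketch is below; the parameter names come from the SLOB distribution, but the values shown are illustrative, not the exact configuration used in this test:

```shell
# slob.conf excerpt -- illustrative values, not the tested configuration
UPDATE_PCT=30          # 30% updates / 70% reads (set 0, 10, 30, or 50 per mix above)
RUN_TIME=300           # seconds per test run
SCALE=10000            # working-set size per schema
WORK_UNIT=64
THREADS_PER_SCHEMA=1
```

A run at a given user count is then launched with SLOB’s driver script, e.g. `./runit.sh 256` for the 256-user data point.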
As you can see, scalability is excellent, and most Oracle customers run an 80/20 or 70/30 read/write ratio in their workloads. System latency ranged from 0.11 milliseconds (ms) for 100% reads to 0.59 ms for a 50/50 read/write mix.
Key takeaways are:
• IOPS are approximately half those of the FIO test referenced above, but FIO used 4K data blocks while SLOB used 8K blocks, so I/O performance is effectively similar.
• Continued near-linear scalability from 64 through 512 users with low latency
SLOB offers a more realistic test of the I/O subsystem than FIO. SLOB makes actual transaction requests that process data and then modify the database as required. Think of this as more aligned with a TPC-C (Transaction Processing Performance Council Benchmark C) OLTP (online transaction processing) benchmark workload, where many small transactions randomly hit the database. An example is an airline reservation system.
Oracle SwingBench
SwingBench is a free, simple-to-use, Java-based tool that generates various types of database workloads and performs stress testing using different benchmarks in Oracle database environments. In this solution, we used SwingBench to run the Order Entry (SOE) benchmark, which represents an OLTP-type workload, and captured the overall performance of this reference architecture.
The Order Entry benchmark is based on the SOE schema and is TPC-C-like in its transaction mix. The workload uses a fairly balanced read/write ratio of around 60/40 and can be designed to run continuously, testing the performance of a typical order-entry workload against a small set of tables and producing contention for database resources.
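SwingBench ships a command-line front end, charbench, for driving benchmarks like SOE without the GUI. A sketch of a typical invocation follows; the config file path, connect string, credentials, user count, and runtime shown here are illustrative placeholders, not the settings used in this test:

```shell
# Illustrative charbench run of the SOE benchmark -- adjust all values for your environment
./charbench -c ../configs/SOE_Server_Side_V2.xml \
            -cs //dbhost:1521/oraclepdb \
            -u soe -p soe \
            -uc 256 \
            -rt 0:30 \
            -v users,tpm,tps
```

The `-v` option streams live statistics (connected users, transactions per minute, transactions per second) to the terminal while the run progresses.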
For this test we created a 3TB database and set it up as a pluggable database within a container database. Here are the testing results:
Wow, these are excellent results and a good indication that the Cisco UCS X210c M6 would be an excellent server to host a single-instance Oracle Database 19c. The key takeaways are:
• Over 2.8 million transactions per minute
• Clearly there is ample headroom to take on more transactions as needs dictate
• IOPS scale as expected and with typical read/write ratios used
• The AWR report from the database shows no significant wait events (congestion)
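For a quick wait-event sanity check outside a full AWR report, the cumulative wait statistics can be queried directly. This is a generic sketch against the standard V$SYSTEM_EVENT view, not the exact query used in our analysis:

```shell
sqlplus -s / as sysdba <<'EOF'
-- Top non-idle wait events since instance startup (quick sanity check;
-- an AWR report via @?/rdbms/admin/awrrpt.sql gives interval-based detail)
SET PAGESIZE 100 LINESIZE 120
SELECT event, total_waits,
       ROUND(time_waited_micro / 1e6) AS seconds_waited
FROM   v$system_event
WHERE  wait_class <> 'Idle'
ORDER  BY time_waited_micro DESC
FETCH FIRST 10 ROWS ONLY;
EOF
```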
Proven Performance Should Disk Failure Occur
Beyond raw performance, the question that causes DBAs to lose sleep is: what happens to this performance when a disk drive fails?
To test this, we ran a working database across five drives on the Cisco X210c compute node. We then pulled out a drive and forced the system to remove it from Oracle ASM. We then reinserted the drive and had ASM add this “new” drive back into the disk group for the database. The chart below shows:
• Performance dropped only about 10% when the drive failure occurred
• ASM took 10 minutes to remove the drive from the disk group
• Once the new drive was inserted, ASM required only 25–30 minutes to rebalance and return database performance to its prior level
Oracle ASM automatically rebalances the database when a drive fails. Drive failures are therefore far less disruptive than they were years ago, and should a drive fail on the Cisco X210c M6, this test shows a fairly low impact that allows work to continue processing.
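The drop/add/rebalance sequence described above can be sketched with standard ASM SQL. The disk group name, disk name, device path, and rebalance power below are placeholders for illustration, not the values from our test:

```shell
sqlplus -s / as sysasm <<'EOF'
-- Illustrative ASM maintenance: disk group DATA, disk names, and paths are placeholders
ALTER DISKGROUP data DROP DISK data_0004;                 -- after the drive is pulled
ALTER DISKGROUP data ADD DISK '/dev/oracleasm/disks/DATA4'
      REBALANCE POWER 8;                                  -- after the drive is reinserted

-- Monitor rebalance progress until no rows are returned
SELECT operation, state, power, sofar, est_work, est_minutes
FROM   v$asm_operation;
EOF
```

Raising the REBALANCE POWER clause speeds up the rebalance at the cost of more I/O overhead on the running database, which is the trade-off behind the 25–30 minute rebalance window observed above.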
Now, I must caution that testing your own database setup could alter the results for any number of reasons, such as read/write ratio. In summary, however, if you are looking to update your server infrastructure to host one to eight single instances of Oracle (one per compute node), you should strongly consider Cisco UCS X-Series. Thanks for reading.