Here is an unusual data warehouse server (columnar, not row-based). I have several very large fact tables, some of them over 800 GB in size.
The total database size is well over 25 TB, and we run very complex queries joining more than 20 tables, including outer joins and self joins.
The workload is mainly financial risk reporting; the reporting tools are SAP BusinessObjects, IBM Cognos, and MicroStrategy dashboards, with lots of cubes and ad-hoc queries. Most queries perform hash/merge joins, GROUP BY, and ORDER BY, and over 95% of the I/O lands on the temp DB devices.
For a reporting server like this, which has to support many SQL statements running concurrently, I/O on the temp database becomes the major bottleneck.
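As a rough sanity check of that 95% figure, you can compare sector I/O on the temp devices against all block devices over a sampling window. The sketch below is illustrative only: it assumes the temp dbspace sits on the fio* flash devices described later in this post, and that the rsec/s and wsec/s columns are in positions 6 and 7 as in this sysstat version.

# Sketch: measure what share of sector I/O hits the fio* devices over 60 seconds.
# iostat's first report is the since-boot average, so only the second one is used.
iostat -x 60 2 | awk '
  $1 ~ /^Device/ { sample++; next }
  sample == 2 && NF >= 12 {
    sect   = $6 + $7                  # rsec/s + wsec/s
    total += sect
    if ($1 ~ /^fio/) temp += sect     # fio* devices hold the temp DB (assumed)
  }
  END { if (total > 0) printf "fio* (temp DB) share of I/O: %.1f%%\n", 100 * temp / total }'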
You can find a good reference on cubes in Data Warehouses Explained by Dremio.
DW graphical query plan output
$ egrep 'Act' CCAR_dqi_20180917-140154_2496060.html
<TR><TD><B><FONT SIZE=-2>Act. Temp Space Used (Mb)</FONT></B></TD><TD><FONT SIZE=-2>231986.2</FONT></TD></TR>
That single query used roughly 232 GB of temp space; if you have 5 concurrent reports similar to that, you will need on the order of 1 TB of temp database space.
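To turn a batch of saved query plans into a sizing estimate, a small pipeline like the one below can pull the "Act. Temp Space Used (Mb)" values out of the HTML plan files and scale them by the expected concurrency. The file-name glob and the concurrency value are assumptions for illustration.

N=5   # expected number of concurrent reports (illustrative)
grep -h 'Act. Temp Space Used' CCAR_*.html \
  | sed 's/<[^>]*>/ /g' \
  | awk -v n="$N" '{ mb = $NF; printf "%12.1f MB per run -> x%d concurrent = %.1f GB temp\n", mb, n, mb * n / 1024 }'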
Setting up the 3 EMC flash drives
On the Red Hat Linux box, we have 3 internal EMC flash cards, presented as fioa/fiob/fioc.
$ dmesg | grep fio | grep 'block device'
fioinf HP 2410GB MLC PCIe ioDrive2 Duo for ProLiant Servers 0000:63:00.0: Creating block device fioa: major: 252 minor: 0 sector size: 512...
fioinf HP 2410GB MLC PCIe ioDrive2 Duo for ProLiant Servers 0000:64:00.0: Creating block device fiob: major: 252 minor: 16 sector size: 512...
fioinf HP 1205GB MLC PCIe ioDrive2 for ProLiant Servers 0000:54:00.0: Creating block device fioc: major: 252 minor: 32 sector size: 512...

$ ls -ltr /dev/fio*
brw-rw---- 1 root disk 252, 16 Aug 19 11:44 /dev/fiob
brw-rw---- 1 root disk 252, 32 Aug 19 11:44 /dev/fioc
brw-rw---- 1 root disk 252,  0 Aug 19 11:44 /dev/fioa
$

From these 3 EMC drives, we set up 5 raw volumes.

$ ls -1 /dev/raw/raw100*
/dev/raw/raw1001
/dev/raw/raw1002
/dev/raw/raw1003
/dev/raw/raw1004
/dev/raw/raw1005
$

$ ls -ltr /dev/ssd*
total 0
lrwxrwxrwx 1 root root 7 Aug 19 11:44 ssd-vol2 -> ../dm-5
lrwxrwxrwx 1 root root 7 Aug 19 11:44 ssd-vol1 -> ../dm-4
lrwxrwxrwx 1 root root 7 Aug 19 11:44 ssd-vol4 -> ../dm-7
lrwxrwxrwx 1 root root 7 Aug 19 11:44 ssd-vol3 -> ../dm-6
lrwxrwxrwx 1 root root 7 Aug 19 11:44 ssd-vol5 -> ../dm-8
$

# pvscan    -- UNIX SA privilege needed for this command
  PV /dev/md0p1   VG ssd-vg   lvm2 [2.00 TiB / 170.24 GiB free]
  PV /dev/fioc    VG ssd-vg   lvm2 [1.10 TiB / 0 free]
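For reference, here is a minimal sketch of one way such a layout can be built. It assumes /dev/md0p1 is an existing software-RAID partition over the fioa/fiob cards; the volume size is illustrative, not the exact command used on this box.

# Register the flash-backed block devices as LVM physical volumes (matches pvscan above)
pvcreate /dev/md0p1 /dev/fioc
vgcreate ssd-vg /dev/md0p1 /dev/fioc

# Carve out the logical volumes; repeat for ssd-vol2 .. ssd-vol5 (size is illustrative)
lvcreate -L 500G -n ssd-vol1 ssd-vg

# Bind each logical volume to a raw device for the database server,
# either directly or persistently via /etc/udev/rules.d/60-raw.rules
raw /dev/raw/raw1001 /dev/ssd-vg/ssd-vol1

The temp dbspace can then be defined on /dev/raw/raw1001 through /dev/raw/raw1005.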
I/O stat monitoring
Finally, we can check the I/O rates of the 3 flash devices.
$ iostat -nx 5 | egrep 'await|fio'
Device:  rrqm/s  wrqm/s    r/s    w/s  rsec/s  wsec/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
fioa       0.00    0.00   5.60  32.97  580.77 3374.98   102.55     1.18  31.46    9.99   35.12   0.22   0.84
fiob       0.00    0.00   5.60  32.40  580.46 3375.32   104.10     1.16  32.36   10.18   36.19   0.22   0.83
fioc       0.00    0.00   6.36  37.48  774.91 4499.65   120.32     0.36   8.34    1.59    9.49   0.13   0.55

Device:  rrqm/s  wrqm/s    r/s    w/s  rsec/s  wsec/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
fioa       0.00    0.00   0.00   1.65    0.00  273.34   165.78     0.00   0.28    0.00    0.28   0.00   0.00
fiob       0.00    0.00   0.00   1.37    0.00  265.71   193.86     0.00   0.29    0.00    0.29   0.00   0.00
fioc       0.00    0.00   0.00   1.41    0.00  383.95   272.23     0.00   0.25    0.00    0.25   0.00   0.00

Device:  rrqm/s  wrqm/s    r/s    w/s  rsec/s  wsec/s avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
fioa       0.00    0.00   0.00  43.11    0.00 5393.08   125.10     0.00   0.20    0.00    0.20   0.00   0.00
fiob       0.00    0.00   0.00  43.29    0.00 5413.54   125.07     0.00   0.21    0.00    0.21   0.00   0.00
fioc       0.00    0.00   0.00  46.20    0.00 7106.33   153.83     0.00   0.19    0.00    0.19   0.00   0.00
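If you want a running history rather than watching a terminal, a small wrapper like the one below can timestamp each sample and append it to a log file. The interval and log path are arbitrary choices for illustration.

# Sketch: keep a timestamped iostat history for the flash devices
stdbuf -oL iostat -nx 5 \
  | egrep --line-buffered 'Device|fio' \
  | while read -r line; do
      echo "$(date '+%Y-%m-%d %H:%M:%S') $line"
    done >> /var/tmp/fio_iostat.log &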