Checking stats of the Greenplum database

posted Apr 28, 2017, 4:39 PM by Sachchida Ojha
How many SQL queries are running in my Greenplum database?
select * from database_now;
select ctime,queries_running,queries_queued,queries_total from database_now;
select ctime,queries_running,queries_queued,queries_total from database_now where <search condition>;
select * from database_history;
select ctime,queries_running,queries_queued,queries_total from database_history;
select ctime,queries_running,queries_queued,queries_total from database_history where <search condition>;
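For instance, the <search condition> placeholder can be replaced with a time filter. The sketch below is an example only; the one-hour interval is an arbitrary choice, while the table and column names are the ones listed above:

```sql
-- Example only: query activity over the last hour
-- (the one-hour interval is an arbitrary choice).
SELECT ctime, queries_running, queries_queued, queries_total
FROM database_history
WHERE ctime > now() - interval '1 hour'
ORDER BY ctime DESC;
```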

How to check the system health of my Greenplum database
select * from health_now;
select ctime,hostname,symptom_code,detailed_symptom_code,description,snmp_oid,status,message from health_now;
select ctime,hostname,symptom_code,detailed_symptom_code,description,snmp_oid,status,message from health_now where <search condition>;
select * from health_history;
select ctime,hostname,symptom_code,detailed_symptom_code,description,snmp_oid,status,message from health_history;
select ctime,hostname,symptom_code,detailed_symptom_code,description,snmp_oid,status,message from health_history where <search condition>;
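As an illustration of a search condition on the health tables, the sketch below restricts the history to the last day; the one-day interval is an arbitrary choice:

```sql
-- Example only: health events from the last day
-- (the one-day interval is an arbitrary choice).
SELECT ctime, hostname, symptom_code, description, status, message
FROM health_history
WHERE ctime > now() - interval '1 day'
ORDER BY ctime DESC;
```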

How to check system metrics (memory, load, swap, CPU, disk I/O) of my Greenplum database
select * from system_now;
select ctime, hostname, mem_total, mem_used, mem_actual_used, mem_actual_free,
swap_total, swap_used, swap_page_in, swap_page_out,
cpu_user, cpu_sys, cpu_idle, load0, load1, load2, quantum,
disk_ro_rate, disk_wo_rate, disk_rb_rate, disk_wb_rate,
net_rp_rate, net_wp_rate, net_rb_rate, net_wb_rate
from system_now;

select ctime, hostname, mem_total, mem_used, mem_actual_used, mem_actual_free,
swap_total, swap_used, swap_page_in, swap_page_out,
cpu_user, cpu_sys, cpu_idle, load0, load1, load2, quantum,
disk_ro_rate, disk_wo_rate, disk_rb_rate, disk_wb_rate,
net_rp_rate, net_wp_rate, net_rb_rate, net_wb_rate
from system_now
where <search condition>;

select * from system_history;
select ctime, hostname, mem_total, mem_used, mem_actual_used, mem_actual_free,
swap_total, swap_used, swap_page_in, swap_page_out,
cpu_user, cpu_sys, cpu_idle, load0, load1, load2, quantum,
disk_ro_rate, disk_wo_rate, disk_rb_rate, disk_wb_rate,
net_rp_rate, net_wp_rate, net_rb_rate, net_wb_rate
from system_history;

select ctime, hostname, mem_total, mem_used, mem_actual_used, mem_actual_free,
swap_total, swap_used, swap_page_in, swap_page_out,
cpu_user, cpu_sys, cpu_idle, load0, load1, load2, quantum,
disk_ro_rate, disk_wo_rate, disk_rb_rate, disk_wb_rate,
net_rp_rate, net_wp_rate, net_rb_rate, net_wb_rate
from system_history
where <search condition>;
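A common search condition on the system tables is a host filter. In the sketch below, 'sdw1' is a hypothetical segment host name; substitute one of your own hosts:

```sql
-- Example only: CPU and load snapshot for a single host
-- ('sdw1' is a hypothetical host name).
SELECT ctime, hostname, cpu_user, cpu_sys, cpu_idle, load0, load1, load2
FROM system_now
WHERE hostname = 'sdw1';
```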

Viewing Historical Query Metrics using SQL
Historical tables in the monitor database store all query and iterator data collected by the monitor. You can analyze this data to identify trends in the performance of Greenplum Database over time.

Calculating the average, minimum, and maximum values for system-wide query statistics aggregated over time can yield a useful view of database utilization. The following is an example SQL view for viewing query statistics in this way.
In this example, a view named database_metrics_1hour averages rows of statistics from the database_history table over hour-long intervals:

DROP VIEW if exists database_metrics_1hour;

CREATE VIEW database_metrics_1hour AS (
SELECT date_trunc('hour', ctime) AS sample_time,
avg(queries_total) AS queries_total,
min(queries_total) AS queries_total_min,
max(queries_total) AS queries_total_max,
avg(queries_running) AS queries_running,
min(queries_running) AS queries_running_min,
max(queries_running) AS queries_running_max,
avg(queries_queued) AS queries_queued,
min(queries_queued) AS queries_queued_min,
max(queries_queued) AS queries_queued_max
FROM database_history
GROUP BY sample_time );
To change the interval for averaging results, adjust the value specified for date_trunc. Valid values are microseconds, milliseconds, second, minute, hour, day, week, month, quarter, year, decade, century and millennium.
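Once created, the view can be queried like any other table. The sketch below is an example only; the limit of 24 rows (one day of hourly samples) is an arbitrary choice:

```sql
-- Example only: the last 24 hourly samples from the view above
-- (the LIMIT of 24 is an arbitrary choice).
SELECT sample_time, queries_total, queries_running_max, queries_queued_max
FROM database_metrics_1hour
ORDER BY sample_time DESC
LIMIT 24;
```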

Viewing Historical System Metrics using SQL
Historical tables in the monitor database store all system data collected by the monitor. You can analyze this data to identify trends in the performance of the Greenplum Database system over time.
With a default quantum setting of 15 seconds, the monitor collects very large amounts of historical system data. To view meaningful groupings of system data records, you can view the data by selected time intervals with a SQL view (as described below).
The following provides an example SQL view for viewing aggregated system metrics. In this example view named system_metrics_aggregate_1min, rows of system metrics from system_history are averaged over one-minute intervals for each host:
DROP VIEW IF EXISTS system_metrics_aggregate_1min;
CREATE VIEW system_metrics_aggregate_1min AS (
SELECT date_trunc('minute', ctime) AS sample_time, hostname,
avg(mem_total) AS mem_total,
avg(mem_used) AS mem_used,
avg(mem_actual_used) AS mem_actual_used,
avg(mem_actual_free) AS mem_actual_free,
avg(swap_total) AS swap_total,
avg(swap_used) AS swap_used,
avg(swap_page_in) AS swap_page_in,
avg(swap_page_out) AS swap_page_out,
avg(cpu_user) AS cpu_user,
avg(cpu_sys) AS cpu_sys,
avg(cpu_idle) AS cpu_idle,
avg(load0) AS load0,
avg(load1) AS load1,
avg(load2) AS load2,
avg(disk_ro_rate) AS disk_ro_rate,
avg(disk_wo_rate) AS disk_wo_rate,
avg(disk_rb_rate) AS disk_rb_rate,
avg(disk_wb_rate) AS disk_wb_rate,
avg(net_rp_rate) AS net_rp_rate,
avg(net_wp_rate) AS net_wp_rate,
avg(net_rb_rate) AS net_rb_rate,
avg(net_wb_rate) AS net_wb_rate
FROM system_history
GROUP BY hostname, sample_time
);

To change the interval for averaging results, adjust the value specified for date_trunc. Valid values are microseconds, milliseconds, second, minute, hour, day, week, month, quarter, year, decade, century and millennium.
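As with the query-metrics view, the aggregated system view can be queried directly. The column subset and row limit in this sketch are arbitrary choices:

```sql
-- Example only: recent per-host CPU utilization, averaged by minute
-- (the column subset and LIMIT are arbitrary choices).
SELECT sample_time, hostname, cpu_user, cpu_sys, cpu_idle
FROM system_metrics_aggregate_1min
ORDER BY sample_time DESC, hostname
LIMIT 20;
```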


Quick reference of the Perfmon database tables
• The database_* tables store query workload information for a Greenplum Database instance.

• The emcconnect_history table displays information about ConnectEMC events and alerts. ConnectEMC events are triggered based on a hardware failure, a fix to a failed hardware component, or a Greenplum Database startup. Once a ConnectEMC event is triggered, an alert is sent to EMC Support.

• The filerep_* tables store health and status metrics for the file replication process. This process is how high availability/mirroring is achieved in a Greenplum Database instance. Statistics are maintained for each primary-mirror pair.

• The health_* tables store system health metrics for the EMC Data Computing Appliance.

• The interface_stats_* tables store statistical metrics for each active interface of a Greenplum Database instance. Note: These tables are in place for future use and are not currently populated.

• The iterators_* tables store information about query plan iterators and their metrics. A query iterator refers to a node or operation in a query plan.

• The queries_* tables store high-level query status information.

• The segment_* tables store memory allocation statistics for the Greenplum Database segment instances.

• The socket_stats_* tables store statistical metrics about socket usage for a Greenplum Database instance. Note: These tables are in place for future use and are not currently populated.

• The system_* tables store system utilization metrics.

• The tcp_stats_* tables store statistical metrics about TCP communications for a Greenplum Database instance. Note: These tables are in place for future use and are not currently populated.

• The udp_stats_* tables store statistical metrics about UDP communications for a Greenplum Database instance. Note: These tables are in place for future use and are not currently populated.
The Command Center database also contains the following views:

• The dynamic_memory_info view shows an aggregate of all the segments per host and the amount of dynamic memory used per host.

• The iterators_*_rollup set of views summarizes the query iterator metrics across all segments in the system.

• The memory_info view shows per-host memory information from the system_history and segment_history tables.
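These views can be queried like tables. The sketch below uses SELECT * because the exact column list of memory_info can vary by release; the row limit is an arbitrary choice:

```sql
-- Example only: the ten most recent rows of per-host memory information
-- (SELECT * is used because the view's column list can vary by release).
SELECT * FROM memory_info
ORDER BY ctime DESC
LIMIT 10;
```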