Greenplum DCA - Quick facts

Master 

1. The master is the entry point to the Greenplum Database system. It is the database process that accepts client connections and processes SQL commands that system users issue.
2. End-users interact with Greenplum Database (through the master) as they would with a typical PostgreSQL database. They connect to the database using client programs such as psql or application programming interfaces (APIs) such as JDBC or ODBC (see the connection example after this list).
3. The master is where the global system catalog resides. The global system catalog is the set of system tables that contain metadata about the Greenplum Database system itself. The master does not contain any user data; data resides only on the segments.
4. The master authenticates client connections, processes incoming SQL commands, distributes workload among segments, coordinates the results returned by each segment, and presents the final results to the client program.
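
In practice, any PostgreSQL-compatible client can reach the system by pointing at the master host. A minimal example; the host name, port, role, and database below are placeholders for illustration, not values from a particular appliance:

$ psql -h mdw -p 5432 -U gpadmin -d sales   # connect to the master host (all names illustrative)

A JDBC client would use the standard PostgreSQL URL form for the same endpoint, e.g. jdbc:postgresql://mdw:5432/sales.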

Standby Master (server specification is the same as the master's)

1. The standby master is configured as a warm standby (see the utility sketch after this list).
2. The standby master is kept up to date by a transaction log replication process, which runs on the standby master host and synchronizes the data between the primary and standby master hosts.
3. When the primary master fails, the log replication process stops, and the standby master can be activated in its place. Upon activation of the standby master, the replicated logs are used to reconstruct the state of the master host at the time of the last successfully committed transaction.
4. The activated standby master effectively becomes the Greenplum Database master, accepting client connections on the master port (which must be set to the same port number on the master host and the backup master host).
5. Since the master does not contain any user data, only the system catalog tables need to be synchronized between the primary and backup copies. When these tables are updated, changes are automatically copied over to the standby master to ensure synchronization with the primary master.
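
The standby lifecycle described above maps onto two standard Greenplum management utilities. A minimal sketch, with the standby host name and the master data directory assumed for illustration:

$ gpinitstandby -s smdw                       # enroll host smdw (placeholder name) as the warm standby master
$ gpactivatestandby -d /data/master/gpseg-1   # run on the standby host after a master failure; path is illustrative

After activation, clients reconnect to the former standby host on the usual master port, per item 4 above.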

Quick facts about Greenplum Database Segment Servers
1. Segments are where data is stored and the majority of query processing takes place. 
2. When a user connects to the database and issues a query, processes are created on each segment to handle the work of that query.
3. User-defined tables and their indexes are distributed across the available segments in a Greenplum Database system; each segment contains a distinct portion of data. 
4. The database server processes that serve segment data run under the corresponding segment instances. Users interact with segments in a Greenplum Database system through the master.
5. In the recommended Greenplum Database hardware configuration, there is one active segment per effective CPU or CPU core. For example, if your segment hosts have two dual-core processors, you would have four primary segments per host.
6. The segments communicate with each other and with the master over the interconnect, which is the networking layer of Greenplum Database.
7. The Greenplum primary and mirror segments are configured to use different interconnect switches in order to provide redundancy in the event of a single switch failure.
8. Greenplum Database provides data redundancy by deploying mirror segments. Mirror segments allow database queries to fail over to a backup segment if the primary segment becomes unavailable.
9. A mirror segment always resides on a different host than its corresponding primary segment.
10. A Greenplum Database system can remain operational if a segment host, network interface, or interconnect switch goes down, as long as all portions of data are available on the remaining active segments.
11. During database operations, only the primary segment is active.
12. Changes to a primary segment are copied over to its mirror using a file block replication process. Until a failure occurs on the primary segment, there is no live segment instance running on the mirror host -- only the replication process.
13. In the event of a segment failure, the file replication process is stopped and the mirror segment is automatically brought up as the active segment instance. All database operations then continue using the mirror. While the mirror is active, it also logs all transactional changes made to the database. When the failed segment is ready to be brought back online, administrators initiate a recovery process to bring it back into operation (see the utility sketch after this list).
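
The failure and recovery flow in items 11-13 is driven with the standard Greenplum utilities; a sketch of the usual sequence:

$ gpstate -e        # list segments with error conditions, e.g. primaries running without their mirrors
$ gpstate -m        # show mirror segment status and synchronization state
$ gprecoverseg      # once the failed host is repaired, resynchronize its segments from the active copies
$ gprecoverseg -r   # optionally rebalance so segments return to their preferred primary/mirror roles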

Each segment instance has its own postgresql.conf file. Some parameters are local: each segment instance examines its own postgresql.conf file to get the value of such a parameter.

To change a local configuration parameter across multiple segments, update the parameter in the postgresql.conf file of each targeted segment, both primary and mirror. Use the gpconfig utility to set a parameter in all Greenplum postgresql.conf files at once. For example (the value is in megabytes):
$ gpconfig -c gp_vmem_protect_limit -v 4096
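
To confirm the change took hold, gpconfig can read the value back; whether a reload suffices or a restart is required depends on the parameter (memory protection limits generally take effect only at startup):

$ gpconfig -s gp_vmem_protect_limit   # report the value recorded on the master and segments
$ gpstop -u                           # reload configuration files, for parameters that allow a live reload
$ gpstop -r                           # restart the system, for parameters that take effect only at startup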

Greenplum Interconnect

1. The interconnect is the networking layer of Greenplum Database. 
2. The interconnect refers to the inter-process communication between segments and the network infrastructure on which this communication relies. 
3. The Greenplum interconnect uses a standard Gigabit Ethernet switching fabric.
4. By default, the interconnect uses User Datagram Protocol (UDP) to send messages over the network. 
5. The Greenplum software performs packet verification beyond what UDP provides, so reliability is equivalent to Transmission Control Protocol (TCP) while performance and scalability exceed those of TCP. If the interconnect used TCP, Greenplum Database would have a scalability limit of 1000 segment instances.
6. With UDP as the current default protocol for the interconnect, this limit does not apply (see the configuration sketch after this list).
7. When a user connects to a database and issues a query, processes are created on each of the segments to handle the work of that query, and the interconnect carries the communication between them.
8. To maximize throughput, interconnect activity is load-balanced over two interconnect networks.
9. To ensure redundancy, a primary segment and its corresponding mirror segment use different interconnect networks. With this configuration, Greenplum Database can continue operating in the event of a single interconnect switch failure.
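
The protocol choice is itself a server configuration parameter (gp_interconnect_type in Greenplum releases of this generation), so switching to TCP for diagnostic purposes is an ordinary gpconfig change. A sketch, assuming the parameter is present in your release:

$ gpconfig -s gp_interconnect_type         # show the interconnect protocol currently configured
$ gpconfig -c gp_interconnect_type -v tcp  # fall back to TCP, subject to the 1000-segment-instance limit
$ gpstop -r                                # restart so the interconnect change takes effect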

ConnectEMC Event Alerts for Interconnect
Code        Description
14.12002    Interconnect Switch Operational Status.
14.12005    Operational status of Interconnect switch flash memory.
14.12006    State of Interconnect switch flash memory.