Export Hbase tables from HDP 2.6 to CDP 7.1

22 February 2022, by Eric Deleforterie

In this post I will describe my journey during a migration from Hortonworks HDP 2.6 to Cloudera CDP 7.1.

I had to export the Hbase tables from an old, less secure cluster to a more recent, more secure one.

The application that uses the Hbase tables can't be stopped for long and has to run in parallel on the 2 clusters with the same data.

Cross-Realm Kerberos trust

First you have to set up a cross-realm Kerberos trust between the 2 Kerberos realms; you can find some good reading on the Red Hat site.

Here is an example for configuring a cross-realm Kerberos trust between 2 realms:

  • REALM_A
  • REALM_B

KDC step

In each KDC you have to create the 2 shared krbtgt principals:

  • krbtgt/REALM_A@REALM_B
  • krbtgt/REALM_B@REALM_A

It is very important that each shared principal has the same KVNO (Key Version Number) and password in both KDCs, otherwise it will not work.
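
For example, with MIT Kerberos you can create them with kadmin.local on the KDC of each realm (the <shared_password> is a placeholder; use the same one on both KDCs and compare the KVNO afterwards):

kadmin.local -q "addprinc -pw <shared_password> krbtgt/REALM_A@REALM_B"
kadmin.local -q "addprinc -pw <shared_password> krbtgt/REALM_B@REALM_A"
# run this on both KDCs and check that the KVNO is identical
kadmin.local -q "getprinc krbtgt/REALM_B@REALM_A"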

krb5.conf step

On each cluster you have to configure the following sections in krb5.conf, depending on your setup:

[realms]
 REALM_A = {
 kdc = host1.domain-A.com
 kdc = host2.domain-A.com
 }
 REALM_B = {
 kdc = host1.domain-B.com
 kdc = host2.domain-B.com
 }

[domain_realm]
 .domain-A.com = REALM_A
  domain-A.com = REALM_A
 .domain-B.com = REALM_B
  domain-B.com = REALM_B
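
Once both sides are configured you can check the trust before touching Hadoop at all. A quick sketch (the user and host names are placeholders), run from a host in REALM_A:

# get a local TGT, then request a cross-realm service ticket for a service hosted in REALM_B
kinit user@REALM_A
kvno hbase/host1.domain-B.com@REALM_B
# the ticket cache should now contain krbtgt/REALM_B@REALM_A and the service ticket
klist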

 

Kerberos issue

So the first idea was to use Hbase replication, which would keep the 2 Hbase clusters synchronized.

But I ran into a problem implementing Hbase replication:

  • Kerberos principals on the CDP cluster have the PRE_AUTH attribute (Attributes: REQUIRES_PRE_AUTH in the kadmin getprinc output)
  • Kerberos principals on the HDP cluster do not have the PRE_AUTH attribute

A principal with the PRE_AUTH attribute can connect to a service without the PRE_AUTH attribute, but the reverse is not possible.

I hit these errors:

KrbException: Generic error (description in e-text) (60) - NO PREAUTH
KrbException: Fail to create credential. (63) - No service creds

So this prevents using the HDP cluster as a source for Hbase replication or export, unless you modify all the principals in the HDP Kerberos KDC.

This is simple to do:

  • stop the entire cluster
  • do a modprinc <principal> +requires_preauth
  • start the entire cluster
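
A minimal sketch of that change, assuming MIT Kerberos and kadmin.local on the KDC host (it touches every principal, so test it outside production first):

# list every principal of the realm (the grep skips the kadmin banner) and add requires_preauth
kadmin.local -q "listprincs" | grep '@REALM_A' | while read princ; do
  kadmin.local -q "modprinc +requires_preauth ${princ}"
done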

But this is a bit risky, and in fact I would have to do it on multiple clusters, as other clusters connect to this one.

Communication layer protection

Ok, so I tried to use the CDP cluster to connect to the HDP cluster and pull the data, since the more secure cluster can talk to the less secure one.

I got further, but found a new problem.

hadoop.rpc.protection was not the same:

  • HDP has hadoop.rpc.protection=authentication
  • CDP has hadoop.rpc.protection=privacy

So I hit this error:

javax.security.sasl.SaslException: No common protection layer between client and server

The client (CDP) was using privacy protection and the server (HDP) authentication protection.

You can read an excellent article on the impact of securing the communication on the eBay site.

In HDP Ambari you can set hadoop.rpc.protection to a list like this:

hadoop.rpc.protection=authentication,privacy

In CDP Cloudera Manager the parameter is a single-choice selector, so you can only pick one of the three values:

  • authentication
  • integrity
  • privacy

The only way to set a list with multiple values is to use a safety valve on hdfs-GATEWAY-BASE. This is not officially supported, but after a search in the source code and a test, it works well.

{
  "name" : "hdfs_client_config_safety_valve", 
  "value" : "<property><name>hadoop.rpc.protection</name><value>authentication,privacy</value></property>",
  "sensitive" : false
}

You have to do the same for Hbase in hbase-GATEWAY-BASE:

{
  "name" : "hbase_client_config_safety_valve",
  "value" : "<property><name>hbase.rpc.protection</name><value>authentication,privacy</value></property>",
  "sensitive" : false
}
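
If you prefer to push these safety valves through the Cloudera Manager API rather than the UI, here is a hedged sketch (the CM host, port, cluster name, service name and API version are assumptions to adapt to your deployment):

# set the HDFS client safety valve on the hdfs-GATEWAY-BASE role config group
# -u admin will prompt for the password; -k skips TLS verification for a self-signed certificate
curl -k -u admin -X PUT -H "Content-Type: application/json" \
  "https://cm.domain-B.com:7183/api/v41/clusters/CDP_Cluster/services/hdfs/roleConfigGroups/hdfs-GATEWAY-BASE/config" \
  -d '{"items":[{"name":"hdfs_client_config_safety_valve","value":"<property><name>hadoop.rpc.protection</name><value>authentication,privacy</value></property>"}]}'

Remember to redeploy the client configuration afterwards.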

 

Setting auth_to_local

When using the hbase principal, you have to translate the remote principal to the local user. This is done by adding a rule to the auth_to_local of the HDP cluster to strip off REALM_B:

<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    DEFAULT
    RULE:[2:$1@$0](.*@REALM_B)s/@.*//
  </value>
</property>

This will translate hbase/host.domain-B.com@REALM_B to the local hbase user on the HDP cluster (REALM_A).
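
You can test the rule without launching any job; the HadoopKerberosName class resolves a principal with the current auth_to_local rules (a quick check, run on the HDP cluster):

hadoop org.apache.hadoop.security.HadoopKerberosName hbase/host.domain-B.com@REALM_B
# should print something like: Name: hbase/host.domain-B.com@REALM_B to hbase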

Setting nameservices

When using long commands with a lot of things to configure, it is easier to use nameservices.

Here is an example of configuring the HDP nameservice on the CDP cluster:

<property>
    <name>dfs.nameservices</name>
    <value>cdp_cluster,hdp_cluster</value>
</property>

<property>
    <name>dfs.ha.namenodes.hdp_cluster</name>
    <value>nn1,nn2</value>
</property>
<property>
    <name>dfs.client.failover.proxy.provider.hdp_cluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
    <name>dfs.ha.automatic-failover.enabled.hdp_cluster</name>
    <value>true</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.hdp_cluster.nn1</name>
    <value>master1.domain-A.com:8020</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.hdp_cluster.nn2</name>
    <value>master2.domain-A.com:8020</value>
</property>
<property>
    <name>dfs.namenode.servicerpc-address.hdp_cluster.nn1</name>
    <value>master1.domain-A.com:8022</value>
</property>
<property>
    <name>dfs.namenode.servicerpc-address.hdp_cluster.nn2</name>
    <value>master2.domain-A.com:8022</value>
</property>
<property>
    <name>dfs.namenode.http-address.hdp_cluster.nn1</name>
    <value>master1.domain-A.com:50070</value>
</property>
<property>
    <name>dfs.namenode.http-address.hdp_cluster.nn2</name>
    <value>master2.domain-A.com:50070</value>
</property>
<property>
    <name>dfs.namenode.https-address.hdp_cluster.nn1</name>
    <value>master1.domain-A.com:50470</value>
</property>
<property>
    <name>dfs.namenode.https-address.hdp_cluster.nn2</name>
    <value>master2.domain-A.com:50470</value>
</property>

 

After this configuration you can use the nameservice directly in your commands, for example:

hdfs dfs -ls hdfs://hdp_cluster/tmp

 

Yarn token renewer

Another problem is token renewal when using distcp or org.apache.hadoop.hbase.snapshot.ExportSnapshot: the job tries to renew the token by contacting the remote KDC.

To avoid this you have to configure an exclusion in the YARN safety valve for yarn-GATEWAY-BASE on the CDP cluster:

<property>
    <name>mapreduce.job.hdfs-servers.token-renewal.exclude</name>
    <value>hdp_cluster</value>
</property>

 

Validation of the communication was done using hdfs dfs -ls commands and distcp.
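
For example (the paths are placeholders):

# list a remote directory through the cross-realm trust
hdfs dfs -ls hdfs://hdp_cluster/tmp
# copy a small test file from the HDP cluster to the CDP cluster
hadoop distcp hdfs://hdp_cluster/tmp/test_file hdfs://cdp_cluster/tmp/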

Finally the Hbase part

So we can’t use Hbase replication, but we can use ExportSnapshot.

I wrote scripts to do the following tasks automatically on the HDP source cluster (a sketch of the snapshot step follows the list):

  • take a snapshot of all the tables
  • use SnapshotInfo to get the HFile count and size of each snapshot and save the information in a csv file
  • generate a file for deleting the snapshots
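
A minimal sketch of the snapshot step (this is not the actual script; it assumes a tables.txt file with one namespace_name:table_name per line):

# take a dated snapshot of every table listed in tables.txt
while read table; do
  snap="$(echo "${table}" | tr ':' '_')_$(date +%Y%m%d)"
  echo "snapshot '${table}', '${snap}'" | hbase shell
done < tables.txt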

On the CDP destination cluster, scripts do the following (a sketch of the split step follows the list):

  • split the tables csv by volumetry (Bytes, Megabytes, Gigabytes, Terabytes)
  • use org.apache.hadoop.hbase.snapshot.ExportSnapshot to import the snapshots in parallel
  • clone the snapshots to recreate the tables
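
A rough illustration of the split step (the csv layout is an assumption: snapshot_name;hfile_count;size_bytes):

# write each snapshot line into a bucket file according to its size
awk -F';' '{
  if      ($3 < 1024*1024)           bucket = "bytes";
  else if ($3 < 1024*1024*1024)      bucket = "megabytes";
  else if ($3 < 1024*1024*1024*1024) bucket = "gigabytes";
  else                               bucket = "terabytes";
  print > ("snapshots_" bucket ".csv")
}' snapshots_info.csv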

Taking a snapshot

snapshot 'namespace_name:table_name', 'snapshot_name'

SnapshotInfo

hbase org.apache.hadoop.hbase.snapshot.SnapshotInfo -stats -snapshot "snapshot_name"

This will give you:

  • general information about the snapshot
  • the number of HFiles
  • the snapshot size
  • the percentage shared with the source table

This information is important for optimizing the number of mappers during the ExportSnapshot.

ExportSnapshot

The following command was used on the CDP cluster to import an Hbase snapshot from HDP:

hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -Dsnapshot.export.skip.tmp=true -Dmapreduce.job.queuename=A_BIG_QUEUE -snapshot "snapshot_name" -copy-from hdfs://hdp_cluster/apps/hbase/data -copy-to hdfs://cdp_cluster/hbase -target "snapshot_name" -mappers nb_mappers
  • snapshot.export.skip.tmp avoids using temporary files that could be removed if your copy takes a long time
  • mapreduce.job.queuename is the queue name
  • mappers is the number of mappers; I usually set it to the snapshot's HFile count, capped at 350 (see the sketch below)
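
A tiny helper to pick that value from the SnapshotInfo output (the hfiles value is the HFile count you noted earlier; 350 is just the cap I used):

# use the HFile count as the mapper count, but never more than 350
hfiles=1234
mappers=$(( hfiles < 350 ? hfiles : 350 ))
echo "${mappers}"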

Clone snapshot

clone_snapshot 'snapshot_name', 'namespace_name:table_name'

This command is very fast and does not depend on the snapshot size.

Addendum

If you are using an hdfs storage policy like ONE_SSD or ALL_SSD for the region servers of an RS group, this method keeps all the block replicas on the standard DISK policy in the hbase archive sub-directory (which is why the clone_snapshot is so quick).

So before releasing to production you have to set the SSD storage policy and run a major compaction of your tables, otherwise you will not get the performance you expect and you will have a bad locality ratio.
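
A possible remediation sketch (the path and table name are placeholders; adapt the policy to your setup):

# set the SSD policy on the table directory, then rewrite the HFiles with a major compaction
hdfs storagepolicies -setStoragePolicy -path /hbase/data/namespace_name/table_name -policy ONE_SSD
echo "major_compact 'namespace_name:table_name'" | hbase shell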

You can identify the impacted regions/tables with:

hdfs fsck <full path name of the file> -files -blocks -locations

You will see the storage type of each replica and can check that your blocks have at least one replica on SSD if you are using ONE_SSD.

Another thing: Hbase sometimes uses a link-like path name when the file is shared with the snapshot, containing the name of the source namespace and the source table (namespace=<source namespace>, table=<source table>).

 

 
