{"id":229,"date":"2022-02-22T16:51:34","date_gmt":"2022-02-22T15:51:34","guid":{"rendered":"https:\/\/deleforterie.com\/wordpress\/?p=229"},"modified":"2023-01-18T08:53:10","modified_gmt":"2023-01-18T07:53:10","slug":"export-hbase-tables-from-hdp-2-6-to-cdp-7-1","status":"publish","type":"post","link":"https:\/\/deleforterie.com\/wordpress\/index.php\/2022\/02\/22\/export-hbase-tables-from-hdp-2-6-to-cdp-7-1\/","title":{"rendered":"Export HBase tables from HDP 2.6 to CDP 7.1"},"content":{"rendered":"\n<p>In this post I will describe my journey during a migration from Hortonworks HDP 2.6 to Cloudera CDP 7.1.<\/p>\n<p>I have to export the HBase tables from an old, less secure cluster to a more recent, more secure cluster.<\/p>\n<p>The application that uses the HBase tables can&#8217;t be stopped for long, and we have to do a dual run with the same data on the two clusters.<\/p>\n<p><!--more--><\/p>\n<h2>Cross-Realm Kerberos trust<\/h2>\n<p>First you have to set up a cross-realm Kerberos trust between the two Kerberos realms; you can find some good reading on the <a href=\"https:\/\/access.redhat.com\/documentation\/en-us\/red_hat_enterprise_linux\/7\/html\/system-level_authentication_guide\/using_trusts\">Red Hat site<\/a>.<\/p>\n<p>Here is an example of configuring a cross-realm Kerberos trust between two realms:<\/p>\n<ul>\n<li>REALM_A<\/li>\n<li>REALM_B<\/li>\n<\/ul>\n<h3>KDC step<\/h3>\n<p>In each KDC you have to create the two shared krbtgt principals:<\/p>\n<ul>\n<li>krbtgt\/REALM_A@REALM_B<\/li>\n<li>krbtgt\/REALM_B@REALM_A<\/li>\n<\/ul>\n<p>It is very important that each principal shares the same KVNO (Key Version Number) and password on both KDCs, otherwise it will not work.<\/p>\n<h3>krb5.conf step<\/h3>\n<p>On each cluster you have to configure the following sections, depending on your configuration:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"Ini\/Conf Syntax\">[realms]\n REALM_A = {\n kdc = host1.domain-A.com\n kdc = host2.domain-A.com\n }\n REALM_B = {\n kdc = 
host1.domain-B.com\n kdc = host2.domain-B.com\n }\n\n[domain_realm]\n .domain-A.com = REALM_A\n  domain-A.com = REALM_A\n .domain-B.com = REALM_B\n  domain-B.com = REALM_B<\/pre>\n<h2>Kerberos issue<\/h2>\n<p>The first idea was to use HBase replication, which would keep the two HBase clusters synchronized.<\/p>\n<p>But I hit a problem when implementing HBase replication:<\/p>\n<ul>\n<li>Kerberos principals on the CDP cluster have the REQUIRES_PRE_AUTH attribute<\/li>\n<li>Kerberos principals on the HDP cluster do not have the REQUIRES_PRE_AUTH attribute<\/li>\n<\/ul>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"Generic Highlighting\">Attributes: REQUIRES_PRE_AUTH<\/pre>\n<p>A principal with the REQUIRES_PRE_AUTH attribute can connect to a service without the attribute, but the reverse is not possible.<\/p>\n<p>I hit this error:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"Generic Highlighting\">KrbException: Generic error (description in e-text) (60) - NO PREAUTH\nKrbException: Fail to create credential. 
(63) - No service creds\n<\/pre>\n<p>So this prevents using the HDP cluster as a source for HBase replication or export, unless you modify all the principals in the HDP Kerberos KDC.<\/p>\n<p>This is simple to do:<\/p>\n<ul>\n<li>stop the entire cluster<\/li>\n<li>run modprinc &lt;principal&gt; +requires_preauth for each principal<\/li>\n<li>start the entire cluster<\/li>\n<\/ul>\n<p>But this is a little risky, and in fact I would have to do it on multiple clusters, as other clusters connect to this one.<\/p>\n<h2>Communication layer protection<\/h2>\n<p>OK, so I tried to use the CDP cluster to connect to the HDP cluster and pull the data, since the more secure cluster can talk to the less secure one.<\/p>\n<p>I got further, but found a new problem.<\/p>\n<p><em><strong>hadoop.rpc.protection<\/strong><\/em> was not the same:<\/p>\n<ul>\n<li>HDP has <em><strong>hadoop.rpc.protection<\/strong><\/em>=authentication<\/li>\n<li>CDP has <em><strong>hadoop.rpc.protection<\/strong><\/em>=privacy<\/li>\n<\/ul>\n<p>So I hit this error:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"Generic Highlighting\">javax.security.sasl.SaslException: No common protection layer between client and server<\/pre>\n<p>The client (CDP) was using the privacy protection level and the server (HDP) the authentication level.<\/p>\n<p>You can read an excellent article on the impact of securing the communication on the <a href=\"https:\/\/tech.ebayinc.com\/engineering\/secure-communication-in-hadoop-without-hurting-performance\/\">eBay site<\/a>.<\/p>\n<p>In Ambari on HDP you can set hadoop.rpc.protection to a list like this:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"Generic Highlighting\">hadoop.rpc.protection=authentication,privacy<\/pre>\n<p>In Cloudera Manager on CDP you have a radio-button parameter and no choice but to select one of the three values:<\/p>\n<ul>\n<li>authentication<\/li>\n<li>integrity<\/li>\n<li>privacy<\/li>\n<\/ul>\n<p>The only solution to set a list with multiple 
values is to use a safety valve for <strong>hdfs-GATEWAY-BASE<\/strong>. This is not supported, but after a search in the source code and a test, it works well.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"JSON\">{\n  \"name\" : \"hdfs_client_config_safety_valve\", \n  \"value\" : \"&lt;property&gt;&lt;name&gt;hadoop.rpc.protection&lt;\/name&gt;&lt;value&gt;authentication,privacy&lt;\/value&gt;&lt;\/property&gt;\",\n  \"sensitive\" : false\n}<\/pre>\n<p>You have to do the same for HBase in <strong>hbase-GATEWAY-BASE<\/strong>:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"JSON\">{\n  \"name\" : \"hbase_client_config_safety_valve\",\n  \"value\" : \"&lt;property&gt;&lt;name&gt;hbase.rpc.protection&lt;\/name&gt;&lt;value&gt;authentication,privacy&lt;\/value&gt;&lt;\/property&gt;\",\n  \"sensitive\" : false\n}<\/pre>\n<h2>Setting auth_to_local<\/h2>\n<p>When using the hbase principal, you have to translate the remote principal to the local user. This is done by adding a rule to the auth_to_local of the HDP cluster to strip REALM_B:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"XML\">&lt;property&gt;\n  &lt;name&gt;hadoop.security.auth_to_local&lt;\/name&gt;\n  &lt;value&gt;\n    DEFAULT\n    RULE:[2:$1@$0](.*@REALM_B)s\/@.*\/\/\n  &lt;\/value&gt;\n&lt;\/property&gt;\n\n<\/pre>\n<p>This will translate the <strong>hbase\/host.domain-B.com@REALM_B<\/strong> principal to the local <strong>hbase<\/strong> user on the REALM_A cluster.<\/p>\n<h2>Setting nameservices<\/h2>\n<p>For long commands and the configuration that follows, it is easier to use nameservices.<\/p>\n<p>Here is an example of configuring the HDP nameservice in the CDP cluster:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"XML\">&lt;property&gt;\n    &lt;name&gt;dfs.nameservices&lt;\/name&gt;\n    &lt;value&gt;cdp_cluster,hdp_cluster&lt;\/value&gt;\n&lt;\/property&gt;\n\n&lt;property&gt;\n    
&lt;name&gt;dfs.ha.namenodes.hdp_cluster&lt;\/name&gt;\n    &lt;value&gt;nn1,nn2&lt;\/value&gt;\n&lt;\/property&gt;\n&lt;property&gt;\n    &lt;name&gt;dfs.client.failover.proxy.provider.hdp_cluster&lt;\/name&gt;\n    &lt;value&gt;org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider&lt;\/value&gt;\n&lt;\/property&gt;\n&lt;property&gt;\n    &lt;name&gt;dfs.ha.automatic-failover.enabled.hdp_cluster&lt;\/name&gt;\n    &lt;value&gt;true&lt;\/value&gt;\n&lt;\/property&gt;\n&lt;property&gt;\n    &lt;name&gt;dfs.namenode.rpc-address.hdp_cluster.nn1&lt;\/name&gt;\n    &lt;value&gt;master1.domain-A.com:8020&lt;\/value&gt;\n&lt;\/property&gt;\n&lt;property&gt;\n    &lt;name&gt;dfs.namenode.rpc-address.hdp_cluster.nn2&lt;\/name&gt;\n    &lt;value&gt;master2.domain-A.com:8020&lt;\/value&gt;\n&lt;\/property&gt;\n&lt;property&gt;\n    &lt;name&gt;dfs.namenode.servicerpc-address.hdp_cluster.nn1&lt;\/name&gt;\n    &lt;value&gt;master1.domain-A.com:8022&lt;\/value&gt;\n&lt;\/property&gt;\n&lt;property&gt;\n    &lt;name&gt;dfs.namenode.servicerpc-address.hdp_cluster.nn2&lt;\/name&gt;\n    &lt;value&gt;master2.domain-A.com:8022&lt;\/value&gt;\n&lt;\/property&gt;\n&lt;property&gt;\n    &lt;name&gt;dfs.namenode.http-address.hdp_cluster.nn1&lt;\/name&gt;\n    &lt;value&gt;master1.domain-A.com:50070&lt;\/value&gt;\n&lt;\/property&gt;\n&lt;property&gt;\n    &lt;name&gt;dfs.namenode.http-address.hdp_cluster.nn2&lt;\/name&gt;\n    &lt;value&gt;master2.domain-A.com:50070&lt;\/value&gt;\n&lt;\/property&gt;\n&lt;property&gt;\n    &lt;name&gt;dfs.namenode.https-address.hdp_cluster.nn1&lt;\/name&gt;\n    &lt;value&gt;master1.domain-A.com:50470&lt;\/value&gt;\n&lt;\/property&gt;\n&lt;property&gt;\n    &lt;name&gt;dfs.namenode.https-address.hdp_cluster.nn2&lt;\/name&gt;\n    &lt;value&gt;master2.domain-A.com:50470&lt;\/value&gt;\n&lt;\/property&gt;\n<\/pre>\n<p>After the configuration you can use the nameservice directly in your commands, for example:<\/p>\n<pre 
class=\"EnlighterJSRAW\" data-enlighter-language=\"Shellscript\">hdfs dfs -ls hdfs:\/\/hdp_cluster\/tmp<\/pre>\n<h2>Yarn token renewer<\/h2>\n<p>Another problem is token renewal when using <strong>distcp<\/strong> or <strong>org.apache.hadoop.hbase.snapshot.ExportSnapshot<\/strong>: the job tries to renew the token by contacting the remote KDC.<\/p>\n<p>To avoid this, you have to configure an exclusion in the YARN safety valve for <strong>yarn-GATEWAY-BASE<\/strong> on the CDP cluster:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"XML\">&lt;property&gt;\n    &lt;name&gt;mapreduce.job.hdfs-servers.token-renewal.exclude&lt;\/name&gt;\n    &lt;value&gt;hdp_cluster&lt;\/value&gt;\n&lt;\/property&gt;\n<\/pre>\n<p>Validation of the communication was done using <strong>hdfs dfs -ls<\/strong> commands and <strong>distcp<\/strong>.<\/p>\n<h2>Finally the HBase part<\/h2>\n<p>So we can&#8217;t use HBase replication, but we can use ExportSnapshot.<\/p>\n<p>I wrote scripts to do the following tasks automatically on the HDP source cluster:<\/p>\n<ul>\n<li>take a snapshot of all the tables<\/li>\n<li>use SnapshotInfo to get the HFile count and size of each snapshot, saving the information in a CSV file<\/li>\n<li>generate a file for deleting the snapshots<\/li>\n<\/ul>\n<p>On the CDP destination cluster, scripts do the following:<\/p>\n<ul>\n<li>split the tables CSV by volume (bytes, megabytes, gigabytes, terabytes)<\/li>\n<li>use <strong>org.apache.hadoop.hbase.snapshot.ExportSnapshot<\/strong> to import the snapshots in parallel<\/li>\n<li>clone the snapshots to recreate the tables<\/li>\n<\/ul>\n<h3>Taking a snapshot<\/h3>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"SQL\">snapshot 'namespace_name:table_name', 'snapshot_name'<\/pre>\n<h3>SnapshotInfo<\/h3>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"Generic Highlighting\">hbase org.apache.hadoop.hbase.snapshot.SnapshotInfo -stats 
-snapshot \"snapshot_name\"<\/pre>\n<p>This will give you:<\/p>\n<ul>\n<li>general information about the snapshot<\/li>\n<li>the number of HFiles<\/li>\n<li>the snapshot size<\/li>\n<li>the percentage shared with the source table<\/li>\n<\/ul>\n<p>This information is important for optimizing the number of mappers during the ExportSnapshot.<\/p>\n<h3>ExportSnapshot<\/h3>\n<p>The following command was used on the CDP cluster to import an HBase snapshot from HDP:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"Shellscript\">hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -Dsnapshot.export.skip.tmp=true -Dmapreduce.job.queuename=A_BIG_QUEUE -snapshot \"snapshot_name\" -copy-from hdfs:\/\/hdp_cluster\/apps\/hbase\/data -copy-to hdfs:\/\/cdp_cluster\/hbase -target \"snapshot_name\" -mappers nb_mappers\n<\/pre>\n<ul>\n<li>snapshot.export.skip.tmp avoids using temporary files that could be removed if your copy takes a long time<\/li>\n<li>mapreduce.job.queuename sets the queue name<\/li>\n<li>-mappers is the number of mappers; I usually set it to the number of files in the snapshot, with a maximum of 350<\/li>\n<\/ul>\n<h3>Clone snapshot<\/h3>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"SQL\">clone_snapshot 'snapshot_name', 'namespace_name:table_name'<\/pre>\n<p>This command is very fast and does not depend on the snapshot size.<\/p>\n<h3><span style=\"color: #ff0000;\"><strong>Addendum<\/strong><\/span><\/h3>\n<p>If you are using an HDFS storage policy like ONE_SSD or ALL_SSD for the region servers of an RS_group, this method keeps all the block replicas on the standard DISK policy in the archive sub-directory of HBase (this is why the clone_snapshot is so quick).<\/p>\n<p>So before releasing to production you have to run a major compaction of your tables using an SSD storage policy, otherwise you will not get the performance you expected and you will have a bad locality ratio.<\/p>\n<p>You can identify the impacted regions\/tables by running:<\/p>\n<pre class=\"EnlighterJSRAW\" 
data-enlighter-language=\"Shellscript\">hdfs fsck &lt;full path name of the file&gt; -files -blocks -locations<\/pre>\n<p>You will see the storage policy of each replica; check that each block has at least one replica on SSD if you are using ONE_SSD.<\/p>\n<p>One other thing: HBase sometimes uses a link-like path name when a file is shared with the snapshot, containing the names of the source namespace and the source table (namespace=&lt;source namespace&gt;, table=&lt;source table&gt;).<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In this post I will describe my journey during a migration from Hortonworks HDP 2.6 to Cloudera CDP 7.1. I have to export the HBase tables from an old, less secure cluster to a more recent, more secure cluster. The application that uses the HBase tables can&#8217;t be stopped for long, and we have&hellip;<\/p>\n","protected":false},"author":2,"featured_media":165,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_exactmetrics_skip_tracking":false,"_exactmetrics_sitenote_active":false,"_exactmetrics_sitenote_note":"","_exactmetrics_sitenote_category":0,"footnotes":""},"categories":[22,4,33,14,21,15,19,16],"tags":[],"class_list":["post-229","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ambari","category-bigdata","category-cloudera","category-hadoop","category-hbase","category-hortonworks","category-kerberos","category-yarn"],"_links":{"self":[{"href":"https:\/\/deleforterie.com\/wordpress\/index.php\/wp-json\/wp\/v2\/posts\/229","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/deleforterie.com\/wordpress\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/deleforterie.com\/wordpress\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/deleforterie.com\/wordpress\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/d
eleforterie.com\/wordpress\/index.php\/wp-json\/wp\/v2\/comments?post=229"}],"version-history":[{"count":31,"href":"https:\/\/deleforterie.com\/wordpress\/index.php\/wp-json\/wp\/v2\/posts\/229\/revisions"}],"predecessor-version":[{"id":266,"href":"https:\/\/deleforterie.com\/wordpress\/index.php\/wp-json\/wp\/v2\/posts\/229\/revisions\/266"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/deleforterie.com\/wordpress\/index.php\/wp-json\/wp\/v2\/media\/165"}],"wp:attachment":[{"href":"https:\/\/deleforterie.com\/wordpress\/index.php\/wp-json\/wp\/v2\/media?parent=229"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/deleforterie.com\/wordpress\/index.php\/wp-json\/wp\/v2\/categories?post=229"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/deleforterie.com\/wordpress\/index.php\/wp-json\/wp\/v2\/tags?post=229"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}