How to connect HBase to HA-enabled NameNodes


How to connect HBase to HA-enabled NameNodes

Dhanushka Parakrama
Hi All

I have a 5-node Hadoop cluster with 2 HA NameNodes running in active/standby
mode, and I'm going to set up an HBase cluster on top of it.

My question is: how can I point the HMaster and the HRegionServers to the
active NameNode from the HBase config? (When a NameNode failover happens,
the HRegionServers and the HMaster should automatically follow the new
active NameNode.)

Is it like below?

*hbase-site.xml----------------------*

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://ha-cluster/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>dn1.cluster.com,nn1.cluster.com,nn2.cluster.com</value>
  </property>
</configuration>


But when I applied the above config and ran *./start-hbase.sh*, the
*HRegionServers* threw the error below. The *HMaster* runs on one of the
NameNode hosts and started fine without any error:

Caused by: java.lang.IllegalArgumentException:
java.net.UnknownHostException: ha-cluster
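
As a first diagnostic (a sketch, assuming the Hadoop client tools are
installed on the region server node; the hostnames and the ha-cluster
nameservice are taken from the configs below), the logical nameservice can
be probed directly from the failing node:

```shell
# On the failing region server node: if the HA client config is visible,
# the logical nameservice URI resolves; if not, this reproduces the same
# UnknownHostException: ha-cluster seen above.
hdfs dfs -ls hdfs://ha-cluster/

# Confirm which HA settings the HDFS client on this node actually sees.
hdfs getconf -confKey dfs.nameservices
hdfs getconf -confKey dfs.ha.namenodes.ha-cluster
```

If the -ls call throws the same UnknownHostException, the node's HDFS
client is not seeing the HA config at all, which points at a config
distribution or classpath problem rather than at the HBase settings
themselves.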





Hadoop configuration
================


*hdfs-site.xml --------------------*

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop/data/namenode</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>ha-cluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.ha-cluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ha-cluster.nn1</name>
    <value>nn1.cluster.com:9000</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ha-cluster.nn2</name>
    <value>nn2.cluster.com:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ha-cluster.nn1</name>
    <value>nn1.cluster.com:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ha-cluster.nn2</name>
    <value>nn2.cluster.com:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://nn1.cluster.com:8485;nn2.cluster.com:8485;dn1.cluster.com:8485/ha-cluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.ha-cluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>nn1.cluster.com:2181,nn2.cluster.com:2181,dn1.cluster.com:2181</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.hosts.exclude</name>
    <value>/usr/local/hadoop/current/etc/hadoop/exclude</value>
  </property>
</configuration>


*core-site.xml --------------------*

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ha-cluster</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/usr/local/hadoop/data/jn</value>
  </property>
</configuration>
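
For reference, the pieces of the HDFS config that an HDFS *client* (which
is what the HMaster and the region servers are) must be able to see in
order to resolve hdfs://ha-cluster are roughly this subset of the
hdfs-site.xml above (a sketch; all values are copied from that file):

```xml
<!-- Minimal client-side HA settings a node needs in order to turn the
     logical name "ha-cluster" into the active NameNode's RPC address. -->
<property>
  <name>dfs.nameservices</name>
  <value>ha-cluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.ha-cluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ha-cluster.nn1</name>
  <value>nn1.cluster.com:9000</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ha-cluster.nn2</name>
  <value>nn2.cluster.com:9000</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.ha-cluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

If a region server node cannot see these keys, the HDFS client falls back
to treating "ha-cluster" as a plain hostname, which is exactly the
UnknownHostException shown above.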



Thank You
Dhanushka

Re: How to connect HBase to HA-enabled NameNodes

Ted Yu-3
For the node where you tried to start the region server, was core-site.xml
on the classpath of HBase?

Cheers
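
One common way to apply this (a sketch, not the only option: the
/usr/local/hbase path is an assumption, while the Hadoop config path comes
from the post above) is to make the Hadoop client configs visible to every
HBase daemon, either by symlinking them into HBase's conf directory or by
adding the Hadoop conf directory to HBase's classpath:

```shell
# Option 1: symlink the Hadoop client configs into HBase's conf dir
# (repeat on every HMaster and region server node).
ln -s /usr/local/hadoop/current/etc/hadoop/core-site.xml /usr/local/hbase/conf/core-site.xml
ln -s /usr/local/hadoop/current/etc/hadoop/hdfs-site.xml /usr/local/hbase/conf/hdfs-site.xml

# Option 2: put the Hadoop conf dir on HBase's classpath via hbase-env.sh.
echo 'export HBASE_CLASSPATH=/usr/local/hadoop/current/etc/hadoop' >> /usr/local/hbase/conf/hbase-env.sh

# Restart HBase and verify the Hadoop conf dir shows up on the classpath.
./stop-hbase.sh && ./start-hbase.sh
hbase classpath | tr ':' '\n' | grep etc/hadoop
```

With either option, the region servers' embedded HDFS client can read the
dfs.nameservices / failover-proxy settings and will follow whichever
NameNode is active, including after a failover.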

On Tue, May 23, 2017 at 7:58 AM, Dhanushka Parakrama <
[hidden email]> wrote:


Re: How to connect HBase to HA-enabled NameNodes

Dhanushka Parakrama
Hi Ted

Thanks, that fixed the issue.

Thank You
Dhanushka

On 23 May 2017 at 20:32, Ted Yu <[hidden email]> wrote:
