Unable to drop table

Unable to drop table

Jean-Adrien
Hello,

I just tried hbase, following the instructions in the API documentation. Everything works fine (create, insert, select) except that I can't drop a table through the HQL shell. When I enter the command ``DROP TABLE test_table;'', there is no error message, but the table remains present.

In the master log I have the following exception:

2008-05-22 12:38:34,240 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 60000, call deleteTable(test_table) from 192.168.1.11:44546: error: org.apache.hadoop.hbase.TableNotDisabledException: test_table
org.apache.hadoop.hbase.TableNotDisabledException: test_table
        at org.apache.hadoop.hbase.HMaster$TableDelete.processScanItem(HMaster.java:2961)
        at org.apache.hadoop.hbase.HMaster$TableOperation.process(HMaster.java:2750)
        at org.apache.hadoop.hbase.HMaster.deleteTable(HMaster.java:2627)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:585)
        at org.apache.hadoop.hbase.ipc.HbaseRPC$Server.call(HbaseRPC.java:413)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:901)

So, my table is not disabled...

It is not a real problem for me right now, but since I'm learning and testing hbase a bit, I would like to know whether this is normal behaviour, and if so, whether I need to change the table state and how.

versions: jdk 1.5.0 (sun) / hadoop 0.16.4 / hbase 0.1.2

By the way, is there an obvious link between the size of the dfs DataNode cluster and the hbase HRegionServer cluster? I'm not sure what it means that the hadoop slaves file is a synonym of the hbase regionservers file (as seen in the API documentation), or how hbase deals with the hadoop-site.xml config file; that is, what is the purpose of having the ${HADOOP_CONF} dir in the hbase classpath?

Best regards. Thanks for your work.

J.-A.

RE: Unable to drop table

Jim Kellerman
You must disable the table before it can be dropped.

disable test_table;
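Putting the two steps together in the HQL shell (the DROP TABLE syntax is taken from the original post; table name is of course yours):

```
disable test_table;
drop table test_table;
```

The table must be disabled first so that no regionserver is still serving its regions when the delete runs; otherwise the master throws the TableNotDisabledException seen in the log above.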

---
Jim Kellerman, Senior Engineer; Powerset



Re: Unable to drop table

stack-3
In reply to this post by Jean-Adrien
Jean-Adrien wrote:
> By the way, is there an obvious link between dfs DataNodes cluster size and
> hbase HRegionServers cluster ? I'm not sure what is the meaning of the fact
> that the hadoop slaves file is a synonym of hbase regionServers file (as
> seen in the documentation  http://hadoop.apache.org/hbase/docs/current/ API
> ), and how the hbase deals with hadoop-site.xml config file ; I mean what is
> the purpose to have ${HADOOP_CONF} dir in the hbase classpath ?
>  

There is no 'obvious' heuristic that we're aware of.

Optimally, a regionserver would run on top of the datanode hosting that regionserver's data (we have a bit of work to do to make this happen). If a running regionserver were as light as a feather, we'd suggest just putting up a regionserver on every datanode, but unfortunately they cost something, so the set of regionservers and the set of datanodes tend to diverge. Access patterns, the amount of hbase data, the proportion of your hdfs data that lives in your hbase instance, and the strength of your hosting servers are some of the inputs to consider when sizing your hbase cluster. Because the two sets often don't match, we have a regionservers file, separate from the slaves file, for listing the hosts carrying hbase cluster members.
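So, as a sketch, you might run regionservers on only a subset of your datanode hosts (the hostnames below are made up for illustration):

```
# conf/regionservers -- one hostname per line, same format as the hadoop slaves file
node01.example.com
node02.example.com
node03.example.com
```

The datanode-only hosts would then appear in hadoop's slaves file but not here.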

The documentation on what the regionservers file is, is misleading/incorrect. I'll fix it so that instead of 'synonym' it says 'is like the'.

Are you seeing the HADOOP_CONF_DIR in your CLASSPATH? It's not there by default, not since we became a subproject at least.

Regarding the configuration in hadoop-site.xml: we don't read it unless you explicitly add it to the hbase CLASSPATH (you can do this by adding it to the HBASE_CLASSPATH variable in hbase-env.sh). Most of the time hbase doesn't need to know about site-specific hadoop-site.xml configuration, but if a configuration affects hdfs clients, then you'll want hbase to pick it up. One example would be the use of a non-default replication count. I'm sure there are others.
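For example (the path below is a placeholder for wherever your hadoop conf dir lives; adjust for your install):

```
# conf/hbase-env.sh -- make hbase pick up hadoop's site configuration,
# e.g. a non-default dfs.replication, by putting the hadoop conf dir
# on the hbase classpath
export HBASE_CLASSPATH=/path/to/hadoop/conf
```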

St.Ack