
HBase - Performance issue


HBase - Performance issue

kzurek
The problem is that when I'm putting my data (multithreaded client, ~30MB/s outgoing traffic) into the cluster, the load is spread evenly over all RegionServers, with 3.5% average CPU wait time (average CPU user: 51%). When I add a similar multithreaded client that scans for, say, the last 100 samples of a randomly generated key from a chosen time range, I get high CPU wait time (20% and up) on two (or more, with a higher thread count; default 10) random RegionServers. The machines hosting those RS get very hot, and one consequence is that the number of store files keeps increasing, up to the maximum limit. The rest of the RS sit at 10-12% CPU wait time and seem to be OK (their store file counts vary, so they are being compacted and not growing over time). Any ideas? Could I somehow prioritize writes over reads? Is that possible? If so, what would be the best way to do it, and where should it go - on the client side or the cluster side?
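For context, the "maximum limit" on store files mentioned above is governed by RegionServer settings. A hedged hbase-site.xml sketch (the values are illustrative, not taken from this cluster):

```xml
<!-- hbase-site.xml: illustrative values, not this cluster's settings -->
<property>
  <name>hbase.hstore.blockingStoreFiles</name>
  <!-- writes to a store are blocked once it holds this many files; 0.94 default is 7 -->
  <value>15</value>
</property>
<property>
  <name>hbase.regionserver.thread.compaction.small</name>
  <!-- extra minor-compaction threads can help drain a store-file backlog -->
  <value>2</value>
</property>
```

Raising the blocking threshold only buys headroom; if compactions cannot keep up with the write rate, the backlog will still grow.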

Cluster specification:
HBase Version 0.94.2-cdh4.2.0
Hadoop Version 2.0.0-cdh4.2.0
There are 6x DataNodes (5x HDDs each for storing data) and 1x MasterNode
Other settings:
 - Bloom filters (ROWCOL) set
 - Short circuit turned on
 - HDFS Block Size: 128MB
 - Java Heap Size of Namenode/Secondary Namenode in Bytes: 8 GiB
 - Java Heap Size of HBase RegionServer in Bytes: 12 GiB
 - Java Heap Size of HBase Master in Bytes: 4 GiB
 - Java Heap Size of DataNode in Bytes: 1 GiB (default)
Number of regions per RegionServer: 19 (total 114 regions on 6 RS)
Key design: <UUID><TIMESTAMP> -> UUID: 1-10M, TIMESTAMP: 1-N
Table design: 1 column family with 20 columns of 8 bytes

Get client:
Multiple threads
Each thread has its own table instance with its own Scanner.
Each thread has its own range of UUIDs and randomly draws the beginning of the time range to build the rowkey (see above).
Each Scan requests the same number of rows, but with a random rowkey.
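As a sketch, the <UUID><TIMESTAMP> rowkey described above can be composed by plain byte concatenation. The field widths here (4-byte UUID, 8-byte timestamp) are assumptions of ours, since the post does not state the exact encoding:

```java
import java.nio.ByteBuffer;

// Sketch of the <UUID><TIMESTAMP> rowkey layout described in the post.
// Field widths are assumptions: a UUID in the 1-10M range fits in a
// 4-byte int, and the timestamp is taken as 8-byte epoch millis.
class RowKeys {
    static final int UUID_LEN = 4;
    static final int TS_LEN = 8;

    // Big-endian concatenation: [uuid (4 bytes)][timestamp (8 bytes)]
    static byte[] rowKey(int uuid, long timestamp) {
        return ByteBuffer.allocate(UUID_LEN + TS_LEN)
                .putInt(uuid)
                .putLong(timestamp)
                .array();
    }

    // Start/stop keys for "N samples from a chosen time range":
    // scanning [rowKey(uuid, tsFrom), rowKey(uuid, tsTo)) stays within
    // a single UUID prefix.
    static byte[][] scanRange(int uuid, long tsFrom, long tsTo) {
        return new byte[][] { rowKey(uuid, tsFrom), rowKey(uuid, tsTo) };
    }
}
```

Because each scan stays inside one UUID prefix, a random time-range scan lands on a single region; with many client threads drawing random UUIDs, a few regions (and thus a few RegionServers) can end up hot at any given moment.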
 

Re: HBase - Performance issue

Anoop John
Hi
How many request handlers are there in your RegionServers? Can you raise this
number and see?

-Anoop-
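Anoop's suggestion maps to a single property in hbase-site.xml; the value below is illustrative, not a number he recommended, and a rolling restart of the RegionServers is needed for it to take effect:

```xml
<property>
  <name>hbase.regionserver.handler.count</name>
  <!-- RPC handler threads per RegionServer; the 0.94 default is 10 -->
  <value>30</value>
</property>
```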
On Wed, Apr 24, 2013 at 3:42 PM, kzurek <[hidden email]> wrote:


Re: HBase - Performance issue

kzurek
I have the following settings:
 hbase.master.handler.count = 25 (default value in CDH4.2)
 hbase.regionserver.handler.count = 20 (default 10)

Re: HBase - Performance issue

lars hofhansl-2
In reply to this post by kzurek
You may have run into https://issues.apache.org/jira/browse/HBASE-7336 (which is in 0.94.4)
(Although I had not observed this effect as much when short circuit reads are enabled)



----- Original Message -----
From: kzurek <[hidden email]>
To: [hidden email]
Cc:
Sent: Wednesday, April 24, 2013 3:12 AM
Subject: HBase - Performance issue


Re: HBase - Performance issue

kiran-2
Lars,

We are facing a similar situation on a similar cluster configuration. We are
seeing high I/O wait percentages on some machines in our cluster. We have
short-circuit reads enabled, but we still hit the same problem: CPU wait goes
up to 50% in some cases while issuing scan commands from multiple threads.
Is there a workaround other than applying the patch from 0.94.4?

Thanks
Kiran


On Thu, Apr 25, 2013 at 12:12 AM, lars hofhansl <[hidden email]> wrote:



--
Thank you
Kiran Sarvabhotla

-----Even a correct decision is wrong when it is taken late

Re: HBase - Performance issue

kiran-2
Also, the HBase version is 0.94.1.


On Sun, Sep 7, 2014 at 12:00 AM, kiran <[hidden email]> wrote:



--
Thank you
Kiran Sarvabhotla

-----Even a correct decision is wrong when it is taken late

Re: HBase - Performance issue

Michael Segel
What type of drives, controllers, and network bandwidth do you have?

Just curious.


On Sep 6, 2014, at 7:37 PM, kiran <[hidden email]> wrote:



Re: HBase - Performance issue

lars hofhansl-2
In reply to this post by kiran-2
Thinking about it again: if you ran into HBASE-7336 you'd see high CPU load, but *not* IOWAIT.
0.94 is at 0.94.23; you should upgrade. A lot of fixes, improvements, and performance enhancements have gone in since 0.94.4.
You can do a rolling upgrade straight to 0.94.23.

With that out of the way, can you post a jstack of the processes that experience high wait times?

-- Lars
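A minimal sketch of collecting the thread dump Lars asks for, plus a rough filter for threads sitting in HDFS client code. The `DFSClient` grep is our heuristic for spotting read-bound handlers, not something Lars specified:

```shell
#!/bin/sh
# Count stack frames mentioning the HDFS client in a saved thread dump.
# Many such frames across handler threads suggest the RS is blocked on
# HDFS reads rather than burning CPU.
count_dfs_frames() {
    grep -c 'DFSClient' "$1"
}

# Against a live RegionServer (requires a JDK on the host):
#   RS_PID=$(jps | awk '/HRegionServer/ {print $1}')
#   jstack "$RS_PID" > rs.jstack
#   count_dfs_frames rs.jstack
```

Taking two or three dumps a few seconds apart and comparing them is more informative than a single snapshot.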



________________________________
 From: kiran <[hidden email]>
To: [hidden email]; lars hofhansl <[hidden email]>
Sent: Saturday, September 6, 2014 11:30 AM
Subject: Re: HBase - Performance issue
 



Re: HBase - Performance issue

kiran-2
Hi Lars,

Ours is a problem of I/O wait and a network bandwidth increase around the
same time.

Sorry to say this, but ours is a production cluster and we ideally never want
downtime. Also, Lars, we had a very miserable experience upgrading from 0.92
to 0.94: there was never a mention of the change in split policy in the
release notes, the policy was not ideal for our cluster, and it took us at
least a week to figure that out.

Our cluster runs on commodity hardware with big regions (5-10 GB). RegionServer
memory is 10 GB, the disks are 2 TB SATA (5400-7200 rpm), and internal network
bandwidth is 1 Gbit.

So please suggest a workaround that works with 0.94.1.


On Sun, Sep 7, 2014 at 8:42 AM, lars hofhansl <[hidden email]> wrote:



--
Thank you
Kiran Sarvabhotla

-----Even a correct decision is wrong when it is taken late

Re: HBase - Performance issue

kiran-2
We also have this setting enabled:

<property>
    <name>dfs.client.read.shortcircuit</name>
    <value>true</value>
</property>
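On CDH4.2-era Hadoop, that client-side flag alone is usually not sufficient: short-circuit reads also need a domain socket shared with the DataNode. A hedged hdfs-site.xml sketch (the socket path is an example value, not your cluster's):

```xml
<property>
    <name>dfs.client.read.shortcircuit</name>
    <value>true</value>
</property>
<property>
    <name>dfs.domain.socket.path</name>
    <!-- path must exist and be writable by the DataNode user -->
    <value>/var/run/hdfs-sockets/dn</value>
</property>
```

If the socket path is missing or mis-permissioned, reads silently fall back to the normal TCP path, which would explain seeing DataNode-level I/O despite the flag being set.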

On Mon, Sep 8, 2014 at 12:53 PM, kiran <[hidden email]> wrote:



--
Thank you
Kiran Sarvabhotla

-----Even a correct decision is wrong when it is taken late
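[Editor's note: the <UUID><TIMESTAMP> key design quoted above can be sketched as fixed-width big-endian bytes, so that rows for one UUID sort by time and a Scan can bound the time range with start/stop keys. This is an illustrative sketch, not the poster's actual code; the widths (4-byte UUID for the 1-10M range, 8-byte timestamp) are assumptions.]

```java
import java.nio.ByteBuffer;

public class RowKeySketch {
    // 4-byte UUID (1-10M fits in an int) + 8-byte timestamp = 12-byte rowkey.
    // Big-endian encoding makes lexicographic byte order match numeric order
    // for non-negative values, which is how HBase's byte comparator sorts rows.
    static byte[] rowKey(int uuid, long timestamp) {
        return ByteBuffer.allocate(12).putInt(uuid).putLong(timestamp).array();
    }

    public static void main(String[] args) {
        // A scan for "100 samples from a random start time" would use
        // rowKey(uuid, start) / rowKey(uuid, stop) as the Scan's start/stop
        // rows, plus a client-side limit of 100 rows.
        byte[] start = rowKey(42, 1_000_000L);
        System.out.println(start.length); // 12
    }
}
```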

Re: HBase - Performance issue

Andrew Purtell
In reply to this post by kiran-2
What about providing the jstack as Lars suggested? That doesn't
require you to upgrade (yet).

0.94.23 is the same major version as 0.94.1. Upgrading to it is not the
same process as a major upgrade from 0.92 to 0.94. Changes like the
split policy difference you mention don't happen in point releases.
You should consider upgrading to the latest 0.94.x, if not now then at
some point, because a volunteer open source community can really only
support the latest release of a major version. You can insist on
working with a (now very old) release, but we might not be able to
help you much.
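[Editor's note: the dump Lars asked for is just `jstack <RegionServer pid>` run a few times on the hot host during a wait spike. The same stack information can also be pulled via the standard JMX thread bean; a minimal, non-HBase-specific sketch:]

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumpSketch {
    // Render every live thread's name, state, and stack, roughly like jstack.
    static String dump() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        StringBuilder sb = new StringBuilder();
        for (ThreadInfo ti : mx.dumpAllThreads(false, false)) {
            sb.append('"').append(ti.getThreadName()).append("\" ")
              .append(ti.getThreadState()).append('\n');
            for (StackTraceElement frame : ti.getStackTrace()) {
                sb.append("\tat ").append(frame).append('\n');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(dump());
    }
}
```

Threads stuck in disk reads will show up RUNNABLE inside HDFS/socket read frames across successive dumps, which is what distinguishes an I/O-wait problem from a CPU-spin problem like HBASE-7336.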


On Mon, Sep 8, 2014 at 12:23 AM, kiran <[hidden email]> wrote:

> Hi Lars,
>
> Ours is a problem of I/O wait and a network bandwidth increase around the
> same time....
>
> Lars,
>
> Sorry to say this... ours is a production cluster and we would ideally
> never want downtime... Also Lars, we had a very miserable experience while
> upgrading from 0.92 to 0.94... There was never a mention of the change in
> split policy in the release notes... the policy was not ideal for our
> cluster, and it took us at least a week to figure that out....
>
> Our cluster runs on commodity hardware with big regions (5-10 GB)... Region
> server mem is 10 GB...
> 2TB SATA hard disks (5400-7200 rpm)... Internal network bandwidth is 1 Gb
>
> So please suggest us any workaround with 0.94.1....
>
>
> On Sun, Sep 7, 2014 at 8:42 AM, lars hofhansl <[hidden email]> wrote:
>
>> Thinking about it again, if you ran into HBASE-7336 you'd see high CPU
>> load, but *not* IOWAIT.
>> 0.94 is at 0.94.23; you should upgrade. A lot of fixes, improvements, and
>> performance enhancements have gone in since 0.94.4.
>> You can do a rolling upgrade straight to 0.94.23.
>>
>> With that out of the way, can you post a jstack of the processes that
>> experience high wait times?
>>
>> -- Lars
>>
>>   ------------------------------
>>  *From:* kiran <[hidden email]>
>> *To:* [hidden email]; lars hofhansl <[hidden email]>
>> *Sent:* Saturday, September 6, 2014 11:30 AM
>> *Subject:* Re: HBase - Performance issue
>>
>> Lars,
>>
>> We are facing a similar situation on a similar cluster configuration...
>> We are seeing high I/O wait percentages on some machines in our cluster...
>> We have short circuit reads enabled, but we are still facing the same
>> problem.. the CPU wait goes up to 50% in some cases while issuing scan
>> commands with multiple threads.. Is there a workaround other than applying
>> the patch for 0.94.4 ??
>>
>> Thanks
>> Kiran
>>
>>
>> On Thu, Apr 25, 2013 at 12:12 AM, lars hofhansl <[hidden email]> wrote:
>>
>> You may have run into https://issues.apache.org/jira/browse/HBASE-7336
>> (which is in 0.94.4)
>> (Although I had not observed this effect as much when short circuit reads
>> are enabled)
>>
>>
>>
>> ----- Original Message -----
>> From: kzurek <[hidden email]>
>> To: [hidden email]
>> Cc:
>> Sent: Wednesday, April 24, 2013 3:12 AM
>> Subject: HBase - Performance issue



--
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet
Hein (via Tom White)

Re: HBase - Performance issue

Michael Segel
In reply to this post by kiran-2

So you have large RS and you have large regions. Your regions are huge relative to your RS memory heap.
(Not ideal.)

You have slow drives (5400 rpm) and a 1 GbE network.
You didn’t say how many drives per server.

Under load, you will saturate your network with just 4 drives. (Give or take. Never tried 5400 RPM drives)
So you hit one bandwidth bottleneck there.
The other is the ratio of spindles to CPU.  So if you have 4 drives and 8 cores… again under load, you’ll start to see
an I/O bottleneck …
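[Editor's note: the saturation claim above checks out with back-of-the-envelope numbers; the figures here are assumptions (~100 MB/s sequential per SATA spindle, ~117 MB/s usable payload on 1 GbE), not measurements from this cluster.]

```java
public class BandwidthSketch {
    // Aggregate sequential read throughput of N spindles.
    static double aggregateDiskMBs(int spindles, double perSpindleMBs) {
        return spindles * perSpindleMBs;
    }

    public static void main(String[] args) {
        double gbeUsableMBs = 117.0;               // practical 1 GbE payload rate
        double disks = aggregateDiskMBs(4, 100.0); // 4 spindles reading sequentially
        // 4 spindles can push several times what the NIC can carry, so under a
        // read-heavy load the network saturates before the disks do.
        System.out.println(disks > gbeUsableMBs);
    }
}
```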

On average, how many regions do you have per table per server?

I’d consider shrinking your regions.

Sometimes you need to dial back from 11 to a more reasonable listening level… ;-)

HTH

-Mike



On Sep 8, 2014, at 8:23 AM, kiran <[hidden email]> wrote:

