Kusabana is a proxy server between Kibana and Elasticsearch.
It was developed to cache the result of each query.
Kusabana is written in Ruby 2.0, uses the em-proxy gem, and depends on memcached.
Elasticsearch + Kibana is becoming a typical solution for data mining.
However, it is said to have some performance problems.
Why?
I think some of them come from Kibana's behavior.
For example, Kibana makes a request to ES on almost every click.
Furthermore, Elasticsearch has no 'query cache' mechanism, only a 'filter cache'.
These problems seem solvable by a proxy with a simple caching system.
However, the queries produced by Kibana vary with time, path, and the dashboard's environment.
This forces us to build a more functional cache.
Kusabana is a solution implemented for this problem.
Besides caching, Kusabana can store its own log to Elasticsearch.
This makes it easier to tune the configuration.
git clone https://github.com/voyagegroup/kusabana
cd kusabana
bundle install
Because of the memcached gem, it requires libsasl2-dev or a similar package.
proxy:
  host: '0.0.0.0'
  port: 9292
  daemonize: false
  timeout: 15
  # output: 'log/kusabana.log'
  # pid: 'log/kusabana.pid'
es:
  remote:
    host: 'localhost'
    port: 9200
  # output:
  #   index: 'kusabana-log-1'
  #   hosts:
  #     - host: 'localhost'
  #       port: 9200
cache:
  url: 'localhost:11211'
If output is not set, the log is written to STDOUT.
If pid is not set, ./kusabana.pid is used.
If you want to store Kusabana's log and have set output to Elasticsearch, you should PUT the index template to ES.
Run
bundle exec rake template:create
Additionally, you can use a Kibana dashboard for monitoring Kusabana's log:
bundle exec rake dashboard:create
Then the dashboard can be seen at /dashboard/elasticsearch/kusabana.
The configuration of caching is available in ./bin/kusabana.
# Default settings
rules = []
search_caching = Kusabana::Rule.new('POST', /^\/.+\/_search(\?.+)?$/, 300)
timestamp_modifier = Kusabana::QueryModifier.new(/@timestamp/) do |query|
  if query.key?('from') && query.key?('to')
    query['from'] = query['from'] / 100000 * 100000
    query['to'] = query['to'] / 100000 * 100000
  end
  query
end
search_caching.add_modifier(timestamp_modifier)
rules << search_caching
rules << Kusabana::Rule.new('GET', /^\/_nodes$/, 300)
rules << Kusabana::Rule.new('GET', /^\/\S+\/_mapping$/, 300)
rules << Kusabana::Rule.new('GET', /^\/\S+\/_aliases(\?.+)?$/, 300)
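As a self-contained illustration (outside Kusabana) of what the timestamp modifier above does, assuming from and to are epoch milliseconds:

```ruby
# Floors 'from'/'to' to 100,000 ms buckets, mirroring the default
# timestamp_modifier, so near-identical ranges collapse into one cache entry.
def round_range(query)
  if query.key?('from') && query.key?('to')
    query['from'] = query['from'] / 100000 * 100000
    query['to'] = query['to'] / 100000 * 100000
  end
  query
end

a = round_range({ 'from' => 1391000123456, 'to' => 1391000223456 })
b = round_range({ 'from' => 1391000199999, 'to' => 1391000299999 })
puts a == b  # the two slightly different ranges become identical
```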
Kusabana::Rule.new(method, path_pattern, expire)
Kusabana::QueryModifier.new(key_pattern, &block)
When Kusabana serves a request, it checks whether any Rule matches the request by method and path_pattern.
If one matches, Kusabana parses the query, scans the JSON keys, and executes the &block of the QueryModifier matched by key_pattern.
The query is replaced by the return value of &block.
This behavior exists to ignore tiny differences between queries (e.g. the filter by range of @timestamp).
For each request, only the first matching Rule and QueryModifier is applied.
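The first-match behavior can be sketched with a stand-in Struct (not Kusabana's actual classes):

```ruby
# Stand-in for Kusabana::Rule: a request matches when both the HTTP method
# and the path regexp match; Array#find returns the first matching rule.
Rule = Struct.new(:method, :path_pattern, :expire) do
  def match?(m, path)
    method == m && !(path =~ path_pattern).nil?
  end
end

rules = [
  Rule.new('GET', /^\/_nodes$/, 300),
  Rule.new('POST', /^\/.+\/_search(\?.+)?$/, 300)
]

hit = rules.find { |r| r.match?('POST', '/logstash-2014.02.06/_search') }
puts hit.expire  # the _search rule matched, so its expire (300) applies
```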
Each log entry has the following fields.

Request log:
- match [boolean]
- method [string]
- orig_query [string]
- mod_query [string]
- path [string]
- session [string]
- cache [string]: 'no', 'store', 'use' or 'error'
- expire [integer]
- key [string]: METHOD::PATH::HASH

Response log:
- method [string]
- path [string]
- session [string]
- status [integer]
- took [float]

Efficiency log:
- key [string]
- count [integer]
- from [date]
- to [date]
- efficiency [float]: [took when store] * count / expire (if this is big, the query is SO SUITABLE for caching)
- expire [integer]
- avg [float]
- max [float]
- min [float]
- sum [float]
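A worked example of the efficiency formula above, with hypothetical numbers (took is the response time recorded when the entry was stored, in seconds):

```ruby
# efficiency = [took when store] * count / expire
# Hypothetical numbers: a 0.8 s query served 150 times from a 300 s cache.
took_when_store = 0.8
count = 150
expire = 300
efficiency = took_when_store * count / expire
puts efficiency  # => 0.4
```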
Not yet
Today, Kusabana is still an experimental product.
Don't hesitate to send a patch!
bundle exec rake test