[Groonga-commit] droonga/droonga.org at 7c744a2 [gh-pages] Add API documents for the next version 1.1.0


YUKI Hiroshi null+****@clear*****
Sun Nov 30 23:20:40 JST 2014


YUKI Hiroshi	2014-11-30 23:20:40 +0900 (Sun, 30 Nov 2014)

  New Revision: 7c744a2f9e7829a01c3ddc11a32c046a22615156
  https://github.com/droonga/droonga.org/commit/7c744a2f9e7829a01c3ddc11a32c046a22615156

  Message:
    Add API documents for the next version 1.1.0

  Added files:
    _po/ja/reference/1.1.0/catalog/index.po
    _po/ja/reference/1.1.0/catalog/version1/index.po
    _po/ja/reference/1.1.0/catalog/version2/index.po
    _po/ja/reference/1.1.0/commands/add/index.po
    _po/ja/reference/1.1.0/commands/column-create/index.po
    _po/ja/reference/1.1.0/commands/column-list/index.po
    _po/ja/reference/1.1.0/commands/column-remove/index.po
    _po/ja/reference/1.1.0/commands/column-rename/index.po
    _po/ja/reference/1.1.0/commands/delete/index.po
    _po/ja/reference/1.1.0/commands/index.po
    _po/ja/reference/1.1.0/commands/load/index.po
    _po/ja/reference/1.1.0/commands/search/index.po
    _po/ja/reference/1.1.0/commands/select/index.po
    _po/ja/reference/1.1.0/commands/system/index.po
    _po/ja/reference/1.1.0/commands/system/status/index.po
    _po/ja/reference/1.1.0/commands/table-create/index.po
    _po/ja/reference/1.1.0/commands/table-list/index.po
    _po/ja/reference/1.1.0/commands/table-remove/index.po
    _po/ja/reference/1.1.0/http-server/index.po
    _po/ja/reference/1.1.0/index.po
    _po/ja/reference/1.1.0/message/index.po
    _po/ja/reference/1.1.0/plugin/adapter/index.po
    _po/ja/reference/1.1.0/plugin/collector/index.po
    _po/ja/reference/1.1.0/plugin/error/index.po
    _po/ja/reference/1.1.0/plugin/handler/index.po
    _po/ja/reference/1.1.0/plugin/index.po
    _po/ja/reference/1.1.0/plugin/matching-pattern/index.po
    _po/ja/tutorial/1.1.0/add-replica/index.po
    _po/ja/tutorial/1.1.0/basic/index.po
    _po/ja/tutorial/1.1.0/benchmark/index.po
    _po/ja/tutorial/1.1.0/dump-restore/index.po
    _po/ja/tutorial/1.1.0/groonga/index.po
    _po/ja/tutorial/1.1.0/index.po
    _po/ja/tutorial/1.1.0/plugin-development/adapter/index.po
    _po/ja/tutorial/1.1.0/plugin-development/handler/index.po
    _po/ja/tutorial/1.1.0/plugin-development/index.po
    _po/ja/tutorial/1.1.0/virtual-machines-for-experiments/index.po
    _po/ja/tutorial/1.1.0/watch.po
    ja/reference/1.1.0/catalog/index.md
    ja/reference/1.1.0/catalog/version1/index.md
    ja/reference/1.1.0/catalog/version2/index.md
    ja/reference/1.1.0/commands/add/index.md
    ja/reference/1.1.0/commands/column-create/index.md
    ja/reference/1.1.0/commands/column-list/index.md
    ja/reference/1.1.0/commands/column-remove/index.md
    ja/reference/1.1.0/commands/column-rename/index.md
    ja/reference/1.1.0/commands/delete/index.md
    ja/reference/1.1.0/commands/index.md
    ja/reference/1.1.0/commands/load/index.md
    ja/reference/1.1.0/commands/search/index.md
    ja/reference/1.1.0/commands/select/index.md
    ja/reference/1.1.0/commands/system/index.md
    ja/reference/1.1.0/commands/system/status/index.md
    ja/reference/1.1.0/commands/table-create/index.md
    ja/reference/1.1.0/commands/table-list/index.md
    ja/reference/1.1.0/commands/table-remove/index.md
    ja/reference/1.1.0/http-server/index.md
    ja/reference/1.1.0/index.md
    ja/reference/1.1.0/message/index.md
    ja/reference/1.1.0/plugin/adapter/index.md
    ja/reference/1.1.0/plugin/collector/index.md
    ja/reference/1.1.0/plugin/error/index.md
    ja/reference/1.1.0/plugin/handler/index.md
    ja/reference/1.1.0/plugin/index.md
    ja/reference/1.1.0/plugin/matching-pattern/index.md
    ja/tutorial/1.1.0/add-replica/index.md
    ja/tutorial/1.1.0/basic/index.md
    ja/tutorial/1.1.0/benchmark/index.md
    ja/tutorial/1.1.0/dump-restore/index.md
    ja/tutorial/1.1.0/groonga/index.md
    ja/tutorial/1.1.0/index.md
    ja/tutorial/1.1.0/plugin-development/adapter/index.md
    ja/tutorial/1.1.0/plugin-development/handler/index.md
    ja/tutorial/1.1.0/plugin-development/index.md
    ja/tutorial/1.1.0/virtual-machines-for-experiments/index.md
    ja/tutorial/1.1.0/watch.md
    reference/1.1.0/catalog/index.md
    reference/1.1.0/catalog/version1/index.md
    reference/1.1.0/catalog/version2/index.md
    reference/1.1.0/commands/add/index.md
    reference/1.1.0/commands/column-create/index.md
    reference/1.1.0/commands/column-list/index.md
    reference/1.1.0/commands/column-remove/index.md
    reference/1.1.0/commands/column-rename/index.md
    reference/1.1.0/commands/delete/index.md
    reference/1.1.0/commands/index.md
    reference/1.1.0/commands/load/index.md
    reference/1.1.0/commands/search/index.md
    reference/1.1.0/commands/select/index.md
    reference/1.1.0/commands/system/index.md
    reference/1.1.0/commands/system/status/index.md
    reference/1.1.0/commands/table-create/index.md
    reference/1.1.0/commands/table-list/index.md
    reference/1.1.0/commands/table-remove/index.md
    reference/1.1.0/http-server/index.md
    reference/1.1.0/index.md
    reference/1.1.0/message/index.md
    reference/1.1.0/plugin/adapter/index.md
    reference/1.1.0/plugin/collector/index.md
    reference/1.1.0/plugin/error/index.md
    reference/1.1.0/plugin/handler/index.md
    reference/1.1.0/plugin/index.md
    reference/1.1.0/plugin/matching-pattern/index.md
    tutorial/1.1.0/add-replica/index.md
    tutorial/1.1.0/basic/index.md
    tutorial/1.1.0/benchmark/index.md
    tutorial/1.1.0/dump-restore/index.md
    tutorial/1.1.0/groonga/index.md
    tutorial/1.1.0/index.md
    tutorial/1.1.0/plugin-development/adapter/index.md
    tutorial/1.1.0/plugin-development/handler/index.md
    tutorial/1.1.0/plugin-development/index.md
    tutorial/1.1.0/virtual-machines-for-experiments/index.md
    tutorial/1.1.0/watch.md

  Added: _po/ja/reference/1.1.0/catalog/index.po (+30 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/catalog/index.po    2014-11-30 23:20:40 +0900 (033c851)
@@ -0,0 +1,30 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: Catalog\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"A Droonga network consists of several resources. You need to describe\n"
+"them in **catalog**. All the nodes in the network shares the same\n"
+"catalog."
+msgstr ""
+
+msgid "Catalog specification is versioned. Here are available versions:"
+msgstr ""
+
+msgid ""
+" * [version 2](version2/)\n"
+" * [version 1](version1/): (It is deprecated since 1.0.0.)"
+msgstr ""

  Added: _po/ja/reference/1.1.0/catalog/version1/index.po (+462 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/catalog/version1/index.po    2014-11-30 23:20:40 +0900 (216a037)
@@ -0,0 +1,462 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: Catalog\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"A Droonga network consists of several resources. You need to describe\n"
+"them in **catalog**. All the nodes in the network shares the same\n"
+"catalog."
+msgstr ""
+
+msgid "This documentation describes about catalog."
+msgstr ""
+
+msgid ""
+" * TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## How to manage"
+msgstr ""
+
+msgid ""
+"So far, you need to write catalog and share it to all the nodes\n"
+"manually."
+msgstr ""
+
+msgid ""
+"Some utility programs will generate catalog in near feature.\n"
+"Furthermore Droonga network will maintain and share catalog\n"
+"automatically."
+msgstr ""
+
+msgid "## Glossary"
+msgstr ""
+
+msgid "This section describes terms in catalog."
+msgstr ""
+
+msgid "### Catalog"
+msgstr ""
+
+msgid ""
+"Catalog is a series of data which represents the resources in the\n"
+"network."
+msgstr ""
+
+msgid "### Zone"
+msgstr ""
+
+msgid ""
+"Zone is a set of farms. Farms in a zone are expected to close to each\n"
+"other, like in the same host, in the same switch, in the same network."
+msgstr ""
+
+msgid "### Farm"
+msgstr ""
+
+msgid ""
+"A farm is a Droonga Engine instance. Droonga Engine is implemented as\n"
+"a [Fluentd][] plugin, fluent-plugin-droonga."
+msgstr ""
+
+msgid ""
+"A `fluentd` process can have multiple Droonga Engines. If you add one\n"
+"or more `match` entries with type `droonga` into `fluentd.conf`, a\n"
+"`fluentd` process instantiates one or more Droonga Engines."
+msgstr ""
+
+msgid ""
+"A farm has its own workers and a job queue. A farm push request to its\n"
+"job queue and workers pull a request from the job queue."
+msgstr ""
+
+msgid "### Dataset"
+msgstr ""
+
+msgid ""
+"Dataset is a set of logical tables. A logical table must belong to\n"
+"only one dataset."
+msgstr ""
+
+msgid "Each dataset must have an unique name in the same Droonga network."
+msgstr ""
+
+msgid "### Logical table"
+msgstr ""
+
+msgid ""
+"Logical table consists of one or more partitioned physical tables.\n"
+"Logical table doesn't have physical records. It returns physical\n"
+"records from physical tables."
+msgstr ""
+
+msgid ""
+"You can custom how to partition a logical table into one or more\n"
+"physical tables. For example, you can custom partition key, the\n"
+"number of partitions and so on."
+msgstr ""
+
+msgid "### Physical table"
+msgstr ""
+
+msgid ""
+"Physical table is a table in Groonga database. It stores physical\n"
+"records to the table."
+msgstr ""
+
+msgid "### Ring"
+msgstr ""
+
+msgid ""
+"Ring is a series of partition sets. Dataset must have one\n"
+"ring. Dataset creates logical tables on the ring."
+msgstr ""
+
+msgid ""
+"Droonga Engine replicates each record in a logical table into one or\n"
+"more partition sets."
+msgstr ""
+
+msgid "### Partition set"
+msgstr ""
+
+msgid ""
+"Partition set is a set of partitions. A partition set stores all\n"
+"records in all logical tables in the same Droonga network. In other\n"
+"words, dataset is partitioned in a partition set."
+msgstr ""
+
+msgid "A partition set is a replication of other partition set."
+msgstr ""
+
+msgid ""
+"Droonga Engine may support partitioning in one or more partition\n"
+"sets in the future. It will be useful to use different partition\n"
+"size for old data and new data. Normally, old data are smaller and\n"
+"new data are bigger. It is reasonable that you use larger partition\n"
+"size for bigger data."
+msgstr ""
+
+msgid "### Partition"
+msgstr ""
+
+msgid ""
+"Partition is a Groonga database. It has zero or more physical\n"
+"tables."
+msgstr ""
+
+msgid "### Plugin"
+msgstr ""
+
+msgid ""
+"Droonga Engine can be extended by writing plugin scripts.\n"
+"In most cases, a series of plugins work cooperatively to\n"
+"achieve required behaviors.\n"
+"So, plugins are organized by behaviors.\n"
+"Each behavior can be attached to datasets and/or tables by\n"
+"adding \"plugins\" section to the corresponding entry in the catalog."
+msgstr ""
+
+msgid ""
+"More than one plugin can be assigned in a \"plugins\" section as an array.\n"
+"The order in the array controls the execution order of plugins\n"
+"when adapting messages.\n"
+"When adapting an incoming message, plugins are applied in forward order\n"
+"whereas those are applied in reverse order when adapting an outgoing message."
+msgstr ""
+
+msgid "## Example"
+msgstr ""
+
+msgid "Consider the following case:"
+msgstr ""
+
+msgid ""
+" * There are two farms.\n"
+" * All farms (Droonga Engine instances) works on the same fluentd.\n"
+" * Each farm has two partitions.\n"
+" * There are two replicas.\n"
+" * There are two partitions for each table."
+msgstr ""
+
+msgid "Catalog is written as a JSON file. Its file name is `catalog.json`."
+msgstr ""
+
+msgid "Here is a `catalog.json` for the above case:"
+msgstr ""
+
+msgid ""
+"~~~json\n"
+"{\n"
+"  \"version\": 1,\n"
+"  \"effective_date\": \"2013-06-05T00:05:51Z\",\n"
+"  \"zones\": [\"localhost:23003/farm0\", \"localhost:23003/farm1\"],\n"
+"  \"farms\": {\n"
+"    \"localhost:23003/farm0\": {\n"
+"      \"device\": \"disk0\",\n"
+"      \"capacity\": 1024\n"
+"    },\n"
+"    \"localhost:23003/farm1\": {\n"
+"      \"device\": \"disk1\",\n"
+"      \"capacity\": 1024\n"
+"    }\n"
+"  },\n"
+"  \"datasets\": {\n"
+"    \"Wiki\": {\n"
+"      \"workers\": 4,\n"
+"      \"plugins\": [\"groonga\", \"crud\", \"search\"],\n"
+"      \"number_of_replicas\": 2,\n"
+"      \"number_of_partitions\": 2,\n"
+"      \"partition_key\": \"_key\",\n"
+"      \"date_range\": \"infinity\",\n"
+"      \"ring\": {\n"
+"        \"localhost:23004\": {\n"
+"          \"weight\": 10,\n"
+"          \"partitions\": {\n"
+"            \"2013-07-24\": [\n"
+"              \"localhost:23003/farm0.000\",\n"
+"              \"localhost:23003/farm1.000\"\n"
+"            ]\n"
+"          }\n"
+"        },\n"
+"        \"localhost:23005\": {\n"
+"          \"weight\": 10,\n"
+"          \"partitions\": {\n"
+"            \"2013-07-24\": [\n"
+"              \"localhost:23003/farm1.001\",\n"
+"              \"localhost:23003/farm0.001\"\n"
+"            ]\n"
+"          }\n"
+"        }\n"
+"      }\n"
+"    }\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "## Parameters"
+msgstr ""
+
+msgid "Here are descriptions about parameters in `catalog.json`."
+msgstr ""
+
+msgid "### `version` {#version}"
+msgstr ""
+
+msgid "It is a format version of the catalog file."
+msgstr ""
+
+msgid ""
+"Droonga Engine will change `catalog.json` format in the\n"
+"future. Droonga Engine can provide auto format update feature with the\n"
+"information."
+msgstr ""
+
+msgid "The value must be `1`."
+msgstr ""
+
+msgid "This is a required parameter."
+msgstr ""
+
+msgid "Example:"
+msgstr ""
+
+msgid ""
+"~~~json\n"
+"{\n"
+"  \"version\": 1\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "### `effective_date`"
+msgstr ""
+
+msgid ""
+"It is a date string representing the day the catalog becomes\n"
+"effective."
+msgstr ""
+
+msgid "The date string format must be [W3C-DTF][]."
+msgstr ""
+
+msgid "Note: fluent-plugin-droonga 0.8.0 doesn't use this value yet."
+msgstr ""
+
+msgid ""
+"~~~json\n"
+"{\n"
+"  \"effective_date\": \"2013-11-29T11:29:29Z\"\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "### `zones`"
+msgstr ""
+
+msgid ""
+"`Zones` is an array to express proximities between farms.\n"
+"Farms are grouped by a zone, and zones can be grouped by another zone recursiv"
+"ely.\n"
+"Zones make a single tree structure, expressed by nested arrays.\n"
+"Farms in a same branch are regarded as relatively closer than other farms."
+msgstr ""
+
+msgid "e.g."
+msgstr ""
+
+msgid "When the value of `zones` is as follows,"
+msgstr ""
+
+msgid ""
+"```\n"
+"[[\"A\", [\"B\", \"C\"]], \"D\"]\n"
+"```"
+msgstr ""
+
+msgid "it expresses the following tree."
+msgstr ""
+
+msgid ""
+"       /\\\n"
+"      /\\ D\n"
+"     A /\\\n"
+"      B  C"
+msgstr ""
+
+msgid ""
+"This tree means the farm \"B\" and \"C\" are closer than \"A\" or \"D\" to each other."
+"\n"
+"You should make elements in a `zones` close to each other, like in the\n"
+"same host, in the same switch, in the same network."
+msgstr ""
+
+msgid "This is an optional parameter."
+msgstr ""
+
+msgid ""
+"~~~json\n"
+"{\n"
+"  \"zones\": [\n"
+"    [\"localhost:23003/farm0\",\n"
+"     \"localhost:23003/farm1\"],\n"
+"    [\"localhost:23004/farm0\",\n"
+"     \"localhost:23004/farm1\"]\n"
+"  ]\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"*TODO: Discuss about the call of this parameter. This seems completely equals "
+"to the list of keys of `farms`.*"
+msgstr ""
+
+msgid "### `farms`"
+msgstr ""
+
+msgid "It is an array of Droonga Engine instances."
+msgstr ""
+
+msgid ""
+"*TODO: Improve me. For example, we have to describe relations of nested farms,"
+" ex. `children`.*"
+msgstr ""
+
+msgid ""
+"**Farms** correspond with fluent-plugin-droonga instances. A fluentd process m"
+"ay have multiple **farms** if more than one **match** entry with type **droong"
+"a** appear in the \"fluentd.conf\".\n"
+"Each **farm** has its own job queue.\n"
+"Each **farm** can attach to a data partition which is a part of a **dataset**."
+msgstr ""
+
+msgid ""
+"~~~json\n"
+"{\n"
+"  \"farms\": {\n"
+"    \"localhost:23003/farm0\": {\n"
+"      \"device\": \"/disk0\",\n"
+"      \"capacity\": 1024\n"
+"    },\n"
+"    \"localhost:23003/farm1\": {\n"
+"      \"device\": \"/disk1\",\n"
+"      \"capacity\": 1024\n"
+"    }\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "### `datasets`"
+msgstr ""
+
+msgid ""
+"A **dataset** is a set of **tables** which comprise a single logical **table**"
+" virtually.\n"
+"Each **dataset** must have a unique name in the network."
+msgstr ""
+
+msgid "### `ring`"
+msgstr ""
+
+msgid ""
+"`ring` is a series of partitions which comprise a dataset. `replica_count`, `n"
+"umber_of_partitons` and **time-slice** factors affect the number of partitions"
+" in a `ring`."
+msgstr ""
+
+msgid "### `workers`"
+msgstr ""
+
+msgid ""
+"`workers` is an integer number which specifies the number of worker processes "
+"to deal with the dataset.\n"
+"If `0` is specified, no worker is forked and all operations are done in the ma"
+"ster process."
+msgstr ""
+
+msgid "### `number_of_partitions`"
+msgstr ""
+
+msgid ""
+"`number_of_partition` is an integer number which represents the number of part"
+"itions divided by the hash function. The hash function which determines where "
+"each record resides the partition in a dataset is compatible with memcached."
+msgstr ""
+
+msgid "### `date_range`"
+msgstr ""
+
+msgid ""
+"`date_range` determines when to split the dataset. If a string \"infinity\" is a"
+"ssigned, dataset is never split by time factor."
+msgstr ""
+
+msgid "### `number_of_replicas`"
+msgstr ""
+
+msgid ""
+"`number_of_replicas` represents the number of replicas of dataset maintained i"
+"n the network."
+msgstr ""
+
+msgid ""
+"  [Fluentd]: http://fluentd.org/\n"
+"  [W3C-DTF]: http://www.w3.org/TR/NOTE-datetime \"Date and Time Formats\""
+msgstr ""

  Added: _po/ja/reference/1.1.0/catalog/version2/index.po (+1002 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/catalog/version2/index.po    2014-11-30 23:20:40 +0900 (ac7dad4)
@@ -0,0 +1,1002 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: Catalog\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## Abstract {#abstract}"
+msgstr ""
+
+msgid ""
+"`Catalog` is a JSON data to manage the configuration of a Droonga cluster.\n"
+"A Droonga cluster consists of one or more `datasets`, and a `dataset` consists"
+" of other portions. They all must be explicitly described in a `catalog` and b"
+"e shared with all the hosts in the cluster."
+msgstr ""
+
+msgid "## Usage {#usage}"
+msgstr ""
+
+msgid ""
+"This [`version`](#parameter-version) of `catalog` will be available from Droon"
+"ga 1.0.0."
+msgstr ""
+
+msgid "## Syntax {#syntax}"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"version\": <Version number>,\n"
+"      \"effectiveDate\": \"<Effective date>\",\n"
+"      \"datasets\": {\n"
+"        \"<Name of the dataset 1>\": {\n"
+"          \"nWorkers\": <Number of workers>,\n"
+"          \"plugins\": [\n"
+"            \"Name of the plugin 1\",\n"
+"            ...\n"
+"          ],\n"
+"          \"schema\": {\n"
+"            \"<Name of the table 1>\": {\n"
+"              \"type\"             : <\"Array\", \"Hash\", \"PatriciaTrie\" or \"Double"
+"ArrayTrie\">\n"
+"              \"keyType\"          : \"<Type of the primary key>\",\n"
+"              \"tokenizer\"        : \"<Tokenizer>\",\n"
+"              \"normalizer\"       : \"<Normalizer>\",\n"
+"              \"columns\" : {\n"
+"                \"<Name of the column 1>\": {\n"
+"                  \"type\"         : <\"Scalar\", \"Vector\" or \"Index\">,\n"
+"                  \"valueType\"    : \"<Type of the value>\",\n"
+"                  \"vectorOptions\": {\n"
+"                    \"weight\"     : <Weight>,\n"
+"                  },\n"
+"                  \"indexOptions\" : {\n"
+"                    \"section\"    : <Section>,\n"
+"                    \"weight\"     : <Weight>,\n"
+"                    \"position\"   : <Position>,\n"
+"                    \"sources\"    : [\n"
+"                      \"<Name of a column to be indexed>\",\n"
+"                      ...\n"
+"                    ]\n"
+"                  }\n"
+"                },\n"
+"                \"<Name of the column 2>\": { ... },\n"
+"                ...\n"
+"              }\n"
+"            },\n"
+"            \"<Name of the table 2>\": { ... },\n"
+"            ...\n"
+"          },\n"
+"          \"fact\": \"<Name of the fact table>\",\n"
+"          \"replicas\": [\n"
+"            {\n"
+"              \"dimension\": \"<Name of the dimension column>\",\n"
+"              \"slicer\": \"<Name of the slicer function>\",\n"
+"              \"slices\": [\n"
+"                {\n"
+"                  \"label\": \"<Label of the slice>\",\n"
+"                  \"volume\": {\n"
+"                    \"address\": \"<Address string of the volume>\"\n"
+"                  }\n"
+"                },\n"
+"                ...\n"
+"              }\n"
+"            },\n"
+"            ...\n"
+"          ]\n"
+"        },\n"
+"        \"<Name of the dataset 2>\": { ... },\n"
+"        ...\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid "## Details {#details}"
+msgstr ""
+
+msgid "### Catalog definition {#catalog}"
+msgstr ""
+
+msgid ""
+"Value\n"
+": An object with the following key/value pairs."
+msgstr ""
+
+msgid "#### Parameters"
+msgstr ""
+
+msgid "##### `version` {#parameter-version}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Version number of the catalog file."
+msgstr ""
+
+msgid ""
+"Value\n"
+": `2`. (Specification written in this page is valid only when this value is `2"
+"`)"
+msgstr ""
+
+msgid ""
+"Default value\n"
+": None. This is a required parameter."
+msgstr ""
+
+msgid ""
+"Inheritable\n"
+": False."
+msgstr ""
+
+msgid "##### `effectiveDate` {#parameter-effective_date}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": The time when this catalog becomes effective."
+msgstr ""
+
+msgid ""
+"Value\n"
+": A local time string formatted in the [W3C-DTF](http://www.w3.org/TR/NOTE-dat"
+"etime \"Date and Time Formats\"), with the time zone."
+msgstr ""
+
+msgid "##### `datasets` {#parameter-datasets}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Definition of datasets."
+msgstr ""
+
+msgid ""
+"Value\n"
+": An object keyed by the name of the dataset with value the [`dataset` definit"
+"ion](#dataset)."
+msgstr ""
+
+msgid "##### `nWorkers` {#parameter-n_workers}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": The number of worker processes spawned for each database instance."
+msgstr ""
+
+msgid ""
+"Value\n"
+": An integer value."
+msgstr ""
+
+msgid ""
+"Default value\n"
+": 0 (No worker. All operations are done in the master process)"
+msgstr ""
+
+msgid ""
+"Inheritable\n"
+": True. Overridable in `dataset` and `volume` definition."
+msgstr ""
+
+msgid "#### Example"
+msgstr ""
+
+msgid "A version 2 catalog effective after `2013-09-01T00:00:00Z`, with no datasets:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"{\n"
+"  \"version\": 2,\n"
+"  \"effectiveDate\": \"2013-09-01T00:00:00Z\",\n"
+"  \"datasets\": {\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "### Dataset definition {#dataset}"
+msgstr ""
+
+msgid "##### `plugins` {#parameter-plugins}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Name strings of the plugins enabled for the dataset."
+msgstr ""
+
+msgid ""
+"Value\n"
+": An array of strings."
+msgstr ""
+
+msgid "##### `schema` {#parameter-schema}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Definition of tables and their columns."
+msgstr ""
+
+msgid ""
+"Value\n"
+": An object keyed by the name of the table with value the [`table` definition]"
+"(#table)."
+msgstr ""
+
+msgid "##### `fact` {#parameter-fact}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": The name of the fact table. When a `dataset` is stored as more than one `sli"
+"ce`, one [fact table](http://en.wikipedia.org/wiki/Fact_table) must be selecte"
+"d from tables defined in [`schema`](#parameter-schema) parameter."
+msgstr ""
+
+msgid ""
+"Value\n"
+": A string."
+msgstr ""
+
+msgid ""
+"Default value\n"
+": None."
+msgstr ""
+
+msgid "##### `replicas` {#parameter-replicas}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": A collection of volumes which are the copies of each other."
+msgstr ""
+
+msgid ""
+"Value\n"
+": An array of [`volume` definitions](#volume)."
+msgstr ""
+
+msgid ""
+"A dataset with 4 workers per a database instance, with plugins `groonga`, `cru"
+"d` and `search`:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"{\n"
+"  \"nWorkers\": 4,\n"
+"  \"plugins\": [\"groonga\", \"crud\", \"search\"],\n"
+"  \"schema\": {\n"
+"  },\n"
+"  \"replicas\": [\n"
+"  ]\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "### Table definition {#table}"
+msgstr ""
+
+msgid "##### `type` {#parameter-table-type}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Specifies which data structure is used for managing keys of the table."
+msgstr ""
+
+msgid ""
+"Value\n"
+": Any of the following values."
+msgstr ""
+
+msgid ""
+"* `\"Array\"`: for tables which have no keys.\n"
+"* `\"Hash\"`: for hash tables.\n"
+"* `\"PatriciaTrie\"`: for patricia trie tables.\n"
+"* `\"DoubleArrayTrie\"`: for double array trie tables."
+msgstr ""
+
+msgid ""
+"Default value\n"
+": `\"Hash\"`"
+msgstr ""
+
+msgid "##### `keyType` {#parameter-keyType}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Data type of the key of the table. Mustn't be assigned when the `type` is `\""
+"Array\"`."
+msgstr ""
+
+msgid ""
+"Value\n"
+": Any of the following data types."
+msgstr ""
+
+msgid ""
+"* `\"Integer\"`       : 64bit signed integer.\n"
+"* `\"Float\"`         : 64bit floating-point number.\n"
+"* `\"Time\"`          : Time value with microseconds resolution.\n"
+"* `\"ShortText\"`     : Text value up to 4095 bytes length.\n"
+"* `\"TokyoGeoPoint\"` : Tokyo Datum based geometric point value.\n"
+"* `\"WGS84GeoPoint\"` : [WGS84](http://en.wikipedia.org/wiki/World_Geodetic_Syst"
+"em) based geometric point value."
+msgstr ""
+
+msgid ""
+"Default value\n"
+": None. Mandatory for tables with keys."
+msgstr ""
+
+msgid "##### `tokenizer` {#parameter-tokenizer}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Specifies the type of tokenizer used for splitting each text value, when the"
+" table is used as a lexicon. Only available when the `keyType` is `\"ShortText\""
+"`."
+msgstr ""
+
+msgid ""
+"Value\n"
+": Any of the following tokenizer names."
+msgstr ""
+
+msgid ""
+"* `\"TokenDelimit\"`\n"
+"* `\"TokenUnigram\"`\n"
+"* `\"TokenBigram\"`\n"
+"* `\"TokenTrigram\"`\n"
+"* `\"TokenBigramSplitSymbol\"`\n"
+"* `\"TokenBigramSplitSymbolAlpha\"`\n"
+"* `\"TokenBigramSplitSymbolAlphaDigit\"`\n"
+"* `\"TokenBigramIgnoreBlank\"`\n"
+"* `\"TokenBigramIgnoreBlankSplitSymbol\"`\n"
+"* `\"TokenBigramIgnoreBlankSplitSymbolAlpha\"`\n"
+"* `\"TokenBigramIgnoreBlankSplitSymbolAlphaDigit\"`\n"
+"* `\"TokenDelimitNull\"`"
+msgstr ""
+
+msgid "##### `normalizer` {#parameter-normalizer}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Specifies the type of normalizer which normalizes and restricts the key valu"
+"es. Only available when the `keyType` is `\"ShortText\"`."
+msgstr ""
+
+msgid ""
+"Value\n"
+": Any of the following normalizer names."
+msgstr ""
+
+msgid ""
+"* `\"NormalizerAuto\"`\n"
+"* `\"NormalizerNFKC51\"`"
+msgstr ""
+
+msgid "##### `columns` {#parameter-columns}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Column definition for the table."
+msgstr ""
+
+msgid ""
+"Value\n"
+": An object keyed by the name of the column with value the [`column` definitio"
+"n](#column)."
+msgstr ""
+
+msgid "#### Examples"
+msgstr ""
+
+msgid "##### Example 1: Hash table"
+msgstr ""
+
+msgid "A `Hash` table whose key is `ShortText` type, with no columns:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"{\n"
+"  \"type\": \"Hash\",\n"
+"  \"keyType\": \"ShortText\",\n"
+"  \"columns\": {\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "##### Example 2: PatriciaTrie table"
+msgstr ""
+
+msgid ""
+"A `PatriciaTrie` table with `TokenBigram` tokenizer and `NormalizerAuto` norma"
+"lizer, with no columns:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"{\n"
+"  \"type\": \"PatriciaTrie\",\n"
+"  \"keyType\": \"ShortText\",\n"
+"  \"tokenizer\": \"TokenBigram\",\n"
+"  \"normalizer\": \"NormalizerAuto\",\n"
+"  \"columns\": {\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "### Column definition {#column}"
+msgstr ""
+
+msgid "Value"
+msgstr ""
+
+msgid ": An object with the following key/value pairs."
+msgstr ""
+
+msgid "##### `type` {#parameter-column-type}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Specifies the quantity of data stored as each column value."
+msgstr ""
+
+msgid ""
+"Value\n"
+": Any of the followings."
+msgstr ""
+
+msgid ""
+"* `\"Scalar\"`: A single value.\n"
+"* `\"Vector\"`: A list of values.\n"
+"* `\"Index\"` : A set of unique values with additional properties respectively. "
+"Properties can be specified in [`indexOptions`](#parameter-indexOptions)."
+msgstr ""
+
+msgid ""
+"Default value\n"
+": `\"Scalar\"`"
+msgstr ""
+
+msgid "##### `valueType` {#parameter-valueType}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Data type of the column value."
+msgstr ""
+
+msgid ""
+"Value\n"
+": Any of the following data types or the name of another table defined in the "
+"same dataset. When a table name is assigned, the column acts as a foreign key "
+"references the table."
+msgstr ""
+
+msgid ""
+"* `\"Bool\"`          : `true` or `false`.\n"
+"* `\"Integer\"`       : 64bit signed integer.\n"
+"* `\"Float\"`         : 64bit floating-point number.\n"
+"* `\"Time\"`          : Time value with microseconds resolution.\n"
+"* `\"ShortText\"`     : Text value up to 4,095 bytes length.\n"
+"* `\"Text\"`          : Text value up to 2,147,483,647 bytes length.\n"
+"* `\"TokyoGeoPoint\"` : Tokyo Datum based geometric point value.\n"
+"* `\"WGS84GeoPoint\"` : [WGS84](http://en.wikipedia.org/wiki/World_Geodetic_Syst"
+"em) based geometric point value."
+msgstr ""
+
+msgid "##### `vectorOptions` {#parameter-vectorOptions}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Specifies the optional properties of a \"Vector\" column."
+msgstr ""
+
+msgid ""
+"Value\n"
+": An object which is a [`vectorOptions` definition](#vectorOptions)"
+msgstr ""
+
+msgid ""
+"Default value\n"
+": `{}` (Void object)."
+msgstr ""
+
+msgid "##### `indexOptions` {#parameter-indexOptions}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Specifies the optional properties of an \"Index\" column."
+msgstr ""
+
+msgid ""
+"Value\n"
+": An object which is an [`indexOptions` definition](#indexOptions)"
+msgstr ""
+
+msgid "##### Example 1: Scalar column"
+msgstr ""
+
+msgid "A scaler column to store `ShortText` values:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"{\n"
+"  \"type\": \"Scalar\",\n"
+"  \"valueType\": \"ShortText\"\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "##### Example 2: Vector column"
+msgstr ""
+
+msgid "A vector column to store `ShortText` values with weight:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"{\n"
+"  \"type\": \"Scalar\",\n"
+"  \"valueType\": \"ShortText\",\n"
+"  \"vectorOptions\": {\n"
+"    \"weight\": true\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "##### Example 3: Index column"
+msgstr ""
+
+msgid "A column to index `address` column on `Store` table:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"{\n"
+"  \"type\": \"Index\",\n"
+"  \"valueType\": \"Store\",\n"
+"  \"indexOptions\": {\n"
+"    \"sources\": [\n"
+"      \"address\"\n"
+"    ]\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "### vectorOptions definition {#vectorOptions}"
+msgstr ""
+
+msgid "##### `weight` {#parameter-vectorOptions-weight}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Specifies whether the vector column stores the weight data or not. Weight da"
+"ta is used for indicating the importance of the value."
+msgstr ""
+
+msgid ""
+"Value\n"
+": A boolean value (`true` or `false`)."
+msgstr ""
+
+msgid ""
+"Default value\n"
+": `false`."
+msgstr ""
+
+msgid "Store the weight data."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"{\n"
+"  \"weight\": true\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "### indexOptions definition {#indexOptions}"
+msgstr ""
+
+msgid "##### `section` {#parameter-indexOptions-section}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Specifies whether the index column stores the section data or not. Section d"
+"ata is typically used for distinguishing in which part of the sources the valu"
+"e appears."
+msgstr ""
+
+msgid "##### `weight` {#parameter-indexOptions-weight}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Specifies whether the index column stores the weight data or not. Weight dat"
+"a is used for indicating the importance of the value in the sources."
+msgstr ""
+
+msgid "##### `position` {#parameter-indexOptions-position}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Specifies whether the index column stores the position data or not. Position"
+" data is used for specifying the position where the value appears in the sourc"
+"es. It is indispensable for fast and accurate phrase-search."
+msgstr ""
+
+msgid "##### `sources` {#parameter-indexOptions-sources}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Makes the column an inverted index of the referencing table's columns."
+msgstr ""
+
+msgid ""
+"Value\n"
+": An array of column names of the referencing table assigned as [`valueType`]("
+"#parameter-valueType)."
+msgstr ""
+
+msgid ""
+"Store the section data, the weight data and the position data.\n"
+"Index `name` and `address` on the referencing table."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"{\n"
+"  \"section\": true,\n"
+"  \"weight\": true,\n"
+"  \"position\": true\n"
+"  \"sources\": [\n"
+"    \"name\",\n"
+"    \"address\"\n"
+"  ]\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "### Volume definition {#volume}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": A unit to compose a dataset. A dataset consists of one or more volumes. A vo"
+"lume consists of either a single instance of database or a collection of `slic"
+"es`. When a volume consists of a single database instance, `address` parameter"
+" must be assigned and the other parameters must not be assigned. Otherwise, `d"
+"imension`, `slicer` and `slices` are required, and vice versa."
+msgstr ""
+
+msgid "##### `address` {#parameter-address}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Specifies the location of the database instance."
+msgstr ""
+
+msgid ""
+"Value\n"
+": A string in the following format."
+msgstr ""
+
+msgid "      ${host_name}:${port_number}/${tag}.${name}"
+msgstr ""
+
+msgid ""
+"  * `host_name`: The name of host that has the database instance.\n"
+"  * `port_number`: The port number for the database instance.\n"
+"  * `tag`: The tag of the database instance. The tag name can't include `.`. Y"
+"ou can use multiple tags for one host name and port number pair.\n"
+"  * `name`: The name of the databases instance. You can use multiple names for"
+" one host name, port number and tag triplet."
+msgstr ""
+
+msgid "##### `dimension` {#parameter-dimension}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Specifies the dimension to slice the records in the fact table. Either '_key"
+"\" or a scalar type column can be selected from [`columns`](#parameter-columns)"
+" parameter of the fact table. See [dimension](http://en.wikipedia.org/wiki/Dim"
+"ension_%28data_warehouse%29)."
+msgstr ""
+
+msgid ""
+"Default value\n"
+": `\"_key\"`"
+msgstr ""
+
+msgid "##### `slicer` {#parameter-slicer}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Function to slice the value of dimension column."
+msgstr ""
+
+msgid ""
+"Value\n"
+": Name of slicer function."
+msgstr ""
+
+msgid ""
+"Default value\n"
+": `\"hash\"`"
+msgstr ""
+
+msgid ""
+"In order to define a volume which consists of a collection of `slices`,\n"
+"the way how slice records into slices must be decided."
+msgstr ""
+
+msgid ""
+"The slicer function that specified as `slicer` and\n"
+"the column (or key) specified as `dimension`,\n"
+"which is input for the slicer function, defines that."
+msgstr ""
+
+msgid "Slicers are categorized into three types. Here are three types of slicers:"
+msgstr ""
+
+msgid ""
+"Ratio-scaled\n"
+": *Ratio-scaled slicers* slice datapoints in the specified ratio,\n"
+"  e.g. hash function of _key.\n"
+"  Slicers of this type are:"
+msgstr ""
+
+msgid "  * `hash`"
+msgstr ""
+
+msgid ""
+"Ordinal-scaled\n"
+": *Ordinal-scaled slicers* slice datapoints with ordinal values;\n"
+"  the values have some ranking, e.g. time, integer,\n"
+"  element of `{High, Middle, Low}`.\n"
+"  Slicers of this type are:"
+msgstr ""
+
+msgid "  * (not implemented yet)"
+msgstr ""
+
+msgid ""
+"Nominal-scaled\n"
+": *Nominal-scaled slicers* slice datapoints with nominal values;\n"
+"  the values denotes categories,which have no order,\n"
+"  e.g. country, zip code, color.\n"
+"  Slicers of this type are:"
+msgstr ""
+
+msgid "##### `slices` {#parameter-slices}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Definition of slices which store the contents of the data."
+msgstr ""
+
+msgid ""
+"Value\n"
+": An array of [`slice` definitions](#slice)."
+msgstr ""
+
+msgid "##### Example 1: Single instance"
+msgstr ""
+
+msgid "A volume at \"localhost:24224/volume.000\":"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"{\n"
+"  \"address\": \"localhost:24224/volume.000\"\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "##### Example 2: Slices"
+msgstr ""
+
+msgid ""
+"A volume that consists of three slices, records are to be distributed accordin"
+"g to `hash`,\n"
+"which is ratio-scaled slicer function, of `_key`."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"{\n"
+"  \"dimension\": \"_key\",\n"
+"  \"slicer\": \"hash\",\n"
+"  \"slices\": [\n"
+"    {\n"
+"      \"volume\": {\n"
+"        \"address\": \"localhost:24224/volume.000\"\n"
+"      }\n"
+"    },\n"
+"    {\n"
+"      \"volume\": {\n"
+"        \"address\": \"localhost:24224/volume.001\"\n"
+"      }\n"
+"    },\n"
+"    {\n"
+"      \"volume\": {\n"
+"        \"address\": \"localhost:24224/volume.002\"\n"
+"      }\n"
+"    }\n"
+"  ]\n"
+"~~~"
+msgstr ""
+
+msgid "### Slice definition {#slice}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Definition of each slice. Specifies the range of sliced data and the volume "
+"to store the data."
+msgstr ""
+
+msgid "##### `weight` {#parameter-slice-weight}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Specifies the share in the slices. Only available when the `slicer` is ratio"
+"-scaled."
+msgstr ""
+
+msgid ""
+"Value\n"
+": A numeric value."
+msgstr ""
+
+msgid ""
+"Default value\n"
+": `1`."
+msgstr ""
+
+msgid "##### `label` {#parameter-label}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Specifies the concrete value that slicer may return. Only available when the"
+" slicer is nominal-scaled."
+msgstr ""
+
+msgid ""
+"Value\n"
+": A value of the dimension column data type. When the value is not provided, t"
+"his slice is regarded as *else*; matched only if all other labels are not matc"
+"hed. Therefore, only one slice without `label` is allowed in slices."
+msgstr ""
+
+msgid "##### `boundary` {#parameter-boundary}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Specifies the concrete value that can compare with `slicer`'s return value. "
+"Only available when the `slicer` is ordinal-scaled."
+msgstr ""
+
+msgid ""
+"Value\n"
+": A value of the dimension column data type. When the value is not provided, t"
+"his slice is regarded as *else*; this means the slice is open-ended. Therefore"
+", only one slice without `boundary` is allowed in a slices."
+msgstr ""
+
+msgid "##### `volume` {#parameter-volume}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": A volume to store the data which corresponds to the slice."
+msgstr ""
+
+msgid ": An object which is a [`volume` definition](#volume)"
+msgstr ""
+
+msgid "##### Example 1: Ratio-scaled"
+msgstr ""
+
+msgid "Slice for a ratio-scaled slicer, with the weight `1`:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"{\n"
+"  \"weight\": 1,\n"
+"  \"volume\": {\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "##### Example 2: Nominal-scaled"
+msgstr ""
+
+msgid "Slice for a nominal-scaled slicer, with the label `\"1\"`:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"{\n"
+"  \"label\": \"1\",\n"
+"  \"volume\": {\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "##### Example 3: Ordinal-scaled"
+msgstr ""
+
+msgid "Slice for a ordinal-scaled slicer, with the boundary `100`:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"{\n"
+"  \"boundary\": 100,\n"
+"  \"volume\": {\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "## Realworld example"
+msgstr ""
+
+msgid "See the catalog of [basic tutorial]."
+msgstr ""
+
+msgid "  [basic tutorial]: ../../../tutorial/basic"
+msgstr ""

  Added: _po/ja/reference/1.1.0/commands/add/index.po (+390 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/commands/add/index.po    2014-11-30 23:20:40 +0900 (2abbf2e)
@@ -0,0 +1,390 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: add\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## Abstract {#abstract}"
+msgstr ""
+
+msgid ""
+"The `add` command adds a new record to the specified table. Column values of t"
+"he existing record are updated by given values, if the table has a primary key"
+" and there is existing record with the specified key."
+msgstr ""
+
+msgid "## API types {#api-types}"
+msgstr ""
+
+msgid "### HTTP {#api-types-http}"
+msgstr ""
+
+msgid ""
+"Request endpoint\n"
+": `(Document Root)/droonga/add`"
+msgstr ""
+
+msgid ""
+"Request methd\n"
+": `POST`"
+msgstr ""
+
+msgid ""
+"Request URL parameters\n"
+": Nothing."
+msgstr ""
+
+msgid ""
+"Request body\n"
+": A hash of [parameters](#parameters)."
+msgstr ""
+
+msgid ""
+"Response body\n"
+": A [response message](#response)."
+msgstr ""
+
+msgid "### REST {#api-types-rest}"
+msgstr ""
+
+msgid "Not supported."
+msgstr ""
+
+msgid "### Fluentd {#api-types-fluentd}"
+msgstr ""
+
+msgid ""
+"Style\n"
+": Request-Response. One response message is always returned per one request."
+msgstr ""
+
+msgid ""
+"`type` of the request\n"
+": `add`"
+msgstr ""
+
+msgid ""
+"`body` of the request\n"
+": A hash of [parameters](#parameters)."
+msgstr ""
+
+msgid ""
+"`type` of the response\n"
+": `add.result`"
+msgstr ""
+
+msgid "## Parameter syntax {#syntax}"
+msgstr ""
+
+msgid "If the table has a primary key column:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"table\"  : \"<Name of the table>\",\n"
+"      \"key\"    : \"<The primary key of the record>\",\n"
+"      \"values\" : {\n"
+"        \"<Name of the column 1>\" : <value 1>,\n"
+"        \"<Name of the column 2>\" : <value 2>,\n"
+"        ...\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid "If the table has no primary key column:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"table\"  : \"<Name of the table>\",\n"
+"      \"values\" : {\n"
+"        \"<Name of the column 1>\" : <value 1>,\n"
+"        \"<Name of the column 2>\" : <value 2>,\n"
+"        ...\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid "## Usage {#usage}"
+msgstr ""
+
+msgid ""
+"This section describes how to use the `add` command, via a typical usage with "
+"following two tables:"
+msgstr ""
+
+msgid "Person table (without primary key):"
+msgstr ""
+
+msgid ""
+"|name|job (referring the Job table)|\n"
+"|Alice Arnold|announcer|\n"
+"|Alice Cooper|musician|"
+msgstr ""
+
+msgid "Job table (with primary key)"
+msgstr ""
+
+msgid ""
+"|_key|label|\n"
+"|announcer|announcer|\n"
+"|musician|musician|"
+msgstr ""
+
+msgid ""
+"### Adding a new record to a table without primary key {#adding-record-to-tabl"
+"e-without-key}"
+msgstr ""
+
+msgid ""
+"Specify only `table` and `values`, without `key`, if the table has no primary "
+"key."
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\" : \"add\",\n"
+"      \"body\" : {\n"
+"        \"table\"  : \"Person\",\n"
+"        \"values\" : {\n"
+"          \"name\" : \"Bob Dylan\",\n"
+"          \"job\"  : \"musician\"\n"
+"        }\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid ""
+"    => {\n"
+"         \"type\" : \"add.result\",\n"
+"         \"body\" : true\n"
+"       }"
+msgstr ""
+
+msgid ""
+"The `add` command works recursively. If there is no existing record with the k"
+"ey in the referred table, then it is also automatically added silently so you'"
+"ll see no error response. For example this will add a new Person record with a"
+" new Job record named `doctor`."
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\" : \"add\",\n"
+"      \"body\" : {\n"
+"        \"table\"  : \"Person\",\n"
+"        \"values\" : {\n"
+"          \"name\" : \"Alice Miller\",\n"
+"          \"job\"  : \"doctor\"\n"
+"        }\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid ""
+"By the command above, a new record will be automatically added to the Job tabl"
+"e like;"
+msgstr ""
+
+msgid ""
+"|_key|label|\n"
+"|announcer|announcer|\n"
+"|musician|musician|\n"
+"|doctor|(blank)|"
+msgstr ""
+
+msgid ""
+"### Adding a new record to a table with primary key {#adding-record-to-table-w"
+"ith-key}"
+msgstr ""
+
+msgid ""
+"Specify all parameters `table`, `values` and `key`, if the table has a primary"
+" key column."
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\" : \"add\",\n"
+"      \"body\" : {\n"
+"        \"table\"  : \"Job\",\n"
+"        \"key\"    : \"writer\",\n"
+"        \"values\" : {\n"
+"          \"label\" : \"writer\"\n"
+"        }\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid "### Updating column values of an existing record {#updating}"
+msgstr ""
+
+msgid ""
+"This command works as \"updating\" operation, if the table has a primary key col"
+"umn and there is an existing record for the specified key."
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\" : \"add\",\n"
+"      \"body\" : {\n"
+"        \"table\"  : \"Job\",\n"
+"        \"key\"    : \"doctor\",\n"
+"        \"values\" : {\n"
+"          \"label\" : \"doctor\"\n"
+"        }\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid ""
+"You cannot update column values of existing records, if the table has no prima"
+"ry key column. Then this command will always work as \"adding\" operation for th"
+"e table."
+msgstr ""
+
+msgid "## Parameter details {#parameters}"
+msgstr ""
+
+msgid "### `table` {#parameter-table}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": The name of a table which a record is going to be added to."
+msgstr ""
+
+msgid ""
+"Value\n"
+": A name string of an existing table."
+msgstr ""
+
+msgid ""
+"Default value\n"
+": Nothing. This is a required parameter."
+msgstr ""
+
+msgid "### `key` {#parameter-key}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": The primary key for the record going to be added."
+msgstr ""
+
+msgid ""
+"Value\n"
+": A primary key string."
+msgstr ""
+
+msgid ""
+"Default value\n"
+": Nothing. This is required if the table has a primary key column. Otherwise, "
+"this is ignored."
+msgstr ""
+
+msgid ""
+"Existing column values will be updated, if there is an existing record for the"
+" key."
+msgstr ""
+
+msgid "This parameter will be ignored if the table has no primary key column."
+msgstr ""
+
+msgid "### `values` {#parameter-values}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": New values for columns of the record."
+msgstr ""
+
+msgid ""
+"Value\n"
+": A hash. Keys of the hash are column names, values of the hash are new values"
+" for each column."
+msgstr ""
+
+msgid ""
+"Default value\n"
+": `null`"
+msgstr ""
+
+msgid "Value of unspecified columns will not be changed."
+msgstr ""
+
+msgid "## Responses {#response}"
+msgstr ""
+
+msgid ""
+"This returns a boolean value `true` like following as the response's `body`, w"
+"ith `200` as its `statusCode`, if a record is successfully added or updated."
+msgstr ""
+
+msgid "    true"
+msgstr ""
+
+msgid "## Error types {#errors}"
+msgstr ""
+
+msgid ""
+"This command reports errors not only [general errors](/reference/message/#erro"
+"r) but also followings."
+msgstr ""
+
+msgid "### `MissingTableParameter`"
+msgstr ""
+
+msgid ""
+"Means you've forgotten to specify the `table` parameter. The status code is `4"
+"00`."
+msgstr ""
+
+msgid "### `MissingPrimaryKeyParameter`"
+msgstr ""
+
+msgid ""
+"Means you've forgotten to specify the `key` parameter, for a table with the pr"
+"imary key column. The status code is `400`."
+msgstr ""
+
+msgid "### `InvalidValue`"
+msgstr ""
+
+msgid ""
+"Means you've specified an invalid value for a column. For example, a string fo"
+"r a geolocation column, a string for an integer column, etc. The status code i"
+"s `400`."
+msgstr ""
+
+msgid "### `UnknownTable`"
+msgstr ""
+
+msgid ""
+"Means you've specified a table which is not existing in the specified dataset."
+" The status code is `404`."
+msgstr ""
+
+msgid "### `UnknownColumn`"
+msgstr ""
+
+msgid ""
+"Means you've specified any column which is not existing in the specified table"
+". The status code is `404`."
+msgstr ""

  Added: _po/ja/reference/1.1.0/commands/column-create/index.po (+176 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/commands/column-create/index.po    2014-11-30 23:20:40 +0900 (f2d6f90)
@@ -0,0 +1,176 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: column_create\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## Abstract {#abstract}"
+msgstr ""
+
+msgid "The `column_create` command creates a new column into the specified table."
+msgstr ""
+
+msgid ""
+"This is compatible to [the `column_create` command of the Groonga](http://groo"
+"nga.org/docs/reference/commands/column_create.html)."
+msgstr ""
+
+msgid "## API types {#api-types}"
+msgstr ""
+
+msgid "### HTTP {#api-types-http}"
+msgstr ""
+
+msgid ""
+"Request endpoint\n"
+": `(Document Root)/d/column_create`"
+msgstr ""
+
+msgid ""
+"Request methd\n"
+": `GET`"
+msgstr ""
+
+msgid ""
+"Request URL parameters\n"
+": Same to the list of [parameters](#parameters)."
+msgstr ""
+
+msgid ""
+"Request body\n"
+": Nothing."
+msgstr ""
+
+msgid ""
+"Response body\n"
+": A [response message](#response)."
+msgstr ""
+
+msgid "### REST {#api-types-rest}"
+msgstr ""
+
+msgid "Not supported."
+msgstr ""
+
+msgid "### Fluentd {#api-types-fluentd}"
+msgstr ""
+
+msgid ""
+"Style\n"
+": Request-Response. One response message is always returned per one request."
+msgstr ""
+
+msgid ""
+"`type` of the request\n"
+": `column_create`"
+msgstr ""
+
+msgid ""
+"`body` of the request\n"
+": A hash of [parameters](#parameters)."
+msgstr ""
+
+msgid ""
+"`type` of the response\n"
+": `column_create.result`"
+msgstr ""
+
+msgid "## Parameter syntax {#syntax}"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"table\"  : \"<Name of the table>\",\n"
+"      \"name\"   : \"<Name of the column>\",\n"
+"      \"flags\"  : \"<Flags for the column>\",\n"
+"      \"type\"   : \"<Type of the value>\",\n"
+"      \"source\" : \"<Name of a column to be indexed>\"\n"
+"    }"
+msgstr ""
+
+msgid "## Parameter details {#parameters}"
+msgstr ""
+
+msgid "All parameters except `table` and `name` are optional."
+msgstr ""
+
+msgid ""
+"They are compatible to [the parameters of the `column_create` command of the G"
+"roonga](http://groonga.org/docs/reference/commands/column_create.html#paramete"
+"rs). See the linked document for more details."
+msgstr ""
+
+msgid "## Responses {#response}"
+msgstr ""
+
+msgid "This returns an array meaning the result of the operation, as the `body`."
+msgstr ""
+
+msgid ""
+"    [\n"
+"      [\n"
+"        <Groonga's status code>,\n"
+"        <Start time>,\n"
+"        <Elapsed time>\n"
+"      ],\n"
+"      <Column is successfully created or not>\n"
+"    ]"
+msgstr ""
+
+msgid ""
+"This command always returns a response with `200` as its `statusCode`, because"
+" this is a Groonga compatible command and errors of this command must be handl"
+"ed in the way same to Groonga's one."
+msgstr ""
+
+msgid "Response body's details:"
+msgstr ""
+
+msgid ""
+"Status code\n"
+": An integer meaning the operation's result. Possible values are:"
+msgstr ""
+
+msgid ""
+"   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : Successfully processed"
+".\n"
+"   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : There is an"
+"y invalid argument."
+msgstr ""
+
+msgid ""
+"Start time\n"
+": An UNIX time which the operation was started on."
+msgstr ""
+
+msgid ""
+"Elapsed time\n"
+": A decimal of seconds meaning the elapsed time for the operation."
+msgstr ""
+
+msgid ""
+"Column is successfully created or not\n"
+": A boolean value meaning the column was successfully created or not. Possible"
+" values are:"
+msgstr ""
+
+msgid ""
+"   * `true`:The column was successfully created.\n"
+"   * `false`:The column was not created."
+msgstr ""

  Added: _po/ja/reference/1.1.0/commands/column-list/index.po (+167 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/commands/column-list/index.po    2014-11-30 23:20:40 +0900 (07c0501)
@@ -0,0 +1,167 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: column_list\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## Abstract {#abstract}"
+msgstr ""
+
+msgid "The `column_list` command reports the list of all existing columns in a table."
+msgstr ""
+
+msgid ""
+"This is compatible to [the `column_list` command of the Groonga](http://groong"
+"a.org/docs/reference/commands/column_list.html)."
+msgstr ""
+
+msgid "## API types {#api-types}"
+msgstr ""
+
+msgid "### HTTP {#api-types-http}"
+msgstr ""
+
+msgid ""
+"Request endpoint\n"
+": `(Document Root)/d/column_list`"
+msgstr ""
+
+msgid ""
+"Request methd\n"
+": `GET`"
+msgstr ""
+
+msgid ""
+"Request URL parameters\n"
+": Same to the list of [parameters](#parameters)."
+msgstr ""
+
+msgid ""
+"Request body\n"
+": Nothing."
+msgstr ""
+
+msgid ""
+"Response body\n"
+": A [response message](#response)."
+msgstr ""
+
+msgid "### REST {#api-types-rest}"
+msgstr ""
+
+msgid "Not supported."
+msgstr ""
+
+msgid "### Fluentd {#api-types-fluentd}"
+msgstr ""
+
+msgid ""
+"Style\n"
+": Request-Response. One response message is always returned per one request."
+msgstr ""
+
+msgid ""
+"`type` of the request\n"
+": `column_list`"
+msgstr ""
+
+msgid ""
+"`body` of the request\n"
+": A hash of [parameters](#parameters)."
+msgstr ""
+
+msgid ""
+"`type` of the response\n"
+": `column_list.result`"
+msgstr ""
+
+msgid "## Parameter syntax {#syntax}"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"table\" : \"<Name of the table>\"\n"
+"    }"
+msgstr ""
+
+msgid "## Parameter details {#parameters}"
+msgstr ""
+
+msgid "The only one parameter `table` is required."
+msgstr ""
+
+msgid ""
+"They are compatible to [the parameters of the `column_list` command of the Gro"
+"onga](http://groonga.org/docs/reference/commands/column_list.html#parameters)."
+" See the linked document for more details."
+msgstr ""
+
+msgid "## Responses {#response}"
+msgstr ""
+
+msgid "This returns an array meaning the result of the operation, as the `body`."
+msgstr ""
+
+msgid ""
+"    [\n"
+"      [\n"
+"        <Groonga's status code>,\n"
+"        <Start time>,\n"
+"        <Elapsed time>\n"
+"      ],\n"
+"      <List of columns>\n"
+"    ]"
+msgstr ""
+
+msgid ""
+"The structure of the returned array is compatible to [the returned value of th"
+"e Groonga's `table_list` command](http://groonga.org/docs/reference/commands/c"
+"olumn_list.html#return-value). See the linked document for more details."
+msgstr ""
+
+msgid ""
+"This command always returns a response with `200` as its `statusCode`, because"
+" this is a Groonga compatible command and errors of this command must be handl"
+"ed in the way same to Groonga's one."
+msgstr ""
+
+msgid "Response body's details:"
+msgstr ""
+
+msgid ""
+"Status code\n"
+": An integer which means the operation's result. Possible values are:"
+msgstr ""
+
+msgid ""
+"   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : Successfully processed"
+".\n"
+"   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : There is an"
+"y invalid argument."
+msgstr ""
+
+msgid ""
+"Start time\n"
+": An UNIX time which the operation was started on."
+msgstr ""
+
+msgid ""
+"Elapsed time\n"
+": A decimal of seconds meaning the elapsed time for the operation."
+msgstr ""

  Added: _po/ja/reference/1.1.0/commands/column-remove/index.po (+173 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/commands/column-remove/index.po    2014-11-30 23:20:40 +0900 (179fa63)
@@ -0,0 +1,173 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: column_remove\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## Abstract {#abstract}"
+msgstr ""
+
+msgid "The `column_remove` command removes an existing column in a table."
+msgstr ""
+
+msgid ""
+"This is compatible to [the `column_remove` command of the Groonga](http://groo"
+"nga.org/docs/reference/commands/column_remove.html)."
+msgstr ""
+
+msgid "## API types {#api-types}"
+msgstr ""
+
+msgid "### HTTP {#api-types-http}"
+msgstr ""
+
+msgid ""
+"Request endpoint\n"
+": `(Document Root)/d/column_remove`"
+msgstr ""
+
+msgid ""
+"Request methd\n"
+": `GET`"
+msgstr ""
+
+msgid ""
+"Request URL parameters\n"
+": Same to the list of [parameters](#parameters)."
+msgstr ""
+
+msgid ""
+"Request body\n"
+": Nothing."
+msgstr ""
+
+msgid ""
+"Response body\n"
+": A [response message](#response)."
+msgstr ""
+
+msgid "### REST {#api-types-rest}"
+msgstr ""
+
+msgid "Not supported."
+msgstr ""
+
+msgid "### Fluentd {#api-types-fluentd}"
+msgstr ""
+
+msgid ""
+"Style\n"
+": Request-Response. One response message is always returned per one request."
+msgstr ""
+
+msgid ""
+"`type` of the request\n"
+": `column_remove`"
+msgstr ""
+
+msgid ""
+"`body` of the request\n"
+": A hash of [parameters](#parameters)."
+msgstr ""
+
+msgid ""
+"`type` of the response\n"
+": `column_remove.result`"
+msgstr ""
+
+msgid "## Parameter syntax {#syntax}"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"table\" : \"<Name of the table>\",\n"
+"      \"name\"  : \"<Name of the column>\"\n"
+"    }"
+msgstr ""
+
+msgid "## Parameter details {#parameters}"
+msgstr ""
+
+msgid "All parameters are required."
+msgstr ""
+
+msgid ""
+"They are compatible to [the parameters of the `column_remove` command of the G"
+"roonga](http://groonga.org/docs/reference/commands/column_remove.html#paramete"
+"rs). See the linked document for more details."
+msgstr ""
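+
+msgid ""
+"For example, assuming a hypothetical table `Person` with a column `age`, an HT"
+"TP request may look like:"
+msgstr ""
+
+msgid "    GET (Document Root)/d/column_remove?table=Person&name=age"
+msgstr ""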
+
+msgid "## Responses {#response}"
+msgstr ""
+
+msgid "This returns an array meaning the result of the operation, as the `body`."
+msgstr ""
+
+msgid ""
+"    [\n"
+"      [\n"
+"        <Groonga's status code>,\n"
+"        <Start time>,\n"
+"        <Elapsed time>\n"
+"      ],\n"
+"      <Column is successfully removed or not>\n"
+"    ]"
+msgstr ""
+
+msgid ""
+"This command always returns a response with `200` as its `statusCode`, because"
+" this is a Groonga compatible command and errors of this command must be handl"
+"ed in the way same to Groonga's one."
+msgstr ""
+
+msgid "Response body's details:"
+msgstr ""
+
+msgid ""
+"Status code\n"
+": An integer which means the operation's result. Possible values are:"
+msgstr ""
+
+msgid ""
+"   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : Successfully processed"
+".\n"
+"   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : There is an"
+"y invalid argument."
+msgstr ""
+
+msgid ""
+"Start time\n"
+": An UNIX time which the operation was started on."
+msgstr ""
+
+msgid ""
+"Elapsed time\n"
+": A decimal of seconds meaning the elapsed time for the operation."
+msgstr ""
+
+msgid ""
+"Column is successfully removed or not\n"
+": A boolean value meaning the column was successfully removed or not. Possible"
+" values are:"
+msgstr ""
+
+msgid ""
+"   * `true`:The column was successfully removed.\n"
+"   * `false`:The column was not removed."
+msgstr ""

  Added: _po/ja/reference/1.1.0/commands/column-rename/index.po (+174 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/commands/column-rename/index.po    2014-11-30 23:20:40 +0900 (c3ba0d5)
@@ -0,0 +1,174 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: column_rename\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## Abstract {#abstract}"
+msgstr ""
+
+msgid "The `column_rename` command renames an existing column in a table."
+msgstr ""
+
+msgid ""
+"This is compatible to [the `column_rename` command of the Groonga](http://groo"
+"nga.org/docs/reference/commands/column_rename.html)."
+msgstr ""
+
+msgid "## API types {#api-types}"
+msgstr ""
+
+msgid "### HTTP {#api-types-http}"
+msgstr ""
+
+msgid ""
+"Request endpoint\n"
+": `(Document Root)/d/column_rename`"
+msgstr ""
+
+msgid ""
+"Request methd\n"
+": `GET`"
+msgstr ""
+
+msgid ""
+"Request URL parameters\n"
+": Same to the list of [parameters](#parameters)."
+msgstr ""
+
+msgid ""
+"Request body\n"
+": Nothing."
+msgstr ""
+
+msgid ""
+"Response body\n"
+": A [response message](#response)."
+msgstr ""
+
+msgid "### REST {#api-types-rest}"
+msgstr ""
+
+msgid "Not supported."
+msgstr ""
+
+msgid "### Fluentd {#api-types-fluentd}"
+msgstr ""
+
+msgid ""
+"Style\n"
+": Request-Response. One response message is always returned per one request."
+msgstr ""
+
+msgid ""
+"`type` of the request\n"
+": `column_rename`"
+msgstr ""
+
+msgid ""
+"`body` of the request\n"
+": A hash of [parameters](#parameters)."
+msgstr ""
+
+msgid ""
+"`type` of the response\n"
+": `column_rename.result`"
+msgstr ""
+
+msgid "## Parameter syntax {#syntax}"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"table\"    : \"<Name of the table>\",\n"
+"      \"name\"     : \"<Current name of the column>\",\n"
+"      \"new_name\" : \"<New name of the column>\"\n"
+"    }"
+msgstr ""
+
+msgid "## Parameter details {#parameters}"
+msgstr ""
+
+msgid "All parameters are required."
+msgstr ""
+
+msgid ""
+"They are compatible to [the parameters of the `column_rename` command of the G"
+"roonga](http://groonga.org/docs/reference/commands/column_rename.html#paramete"
+"rs). See the linked document for more details."
+msgstr ""
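+
+msgid ""
+"For example, assuming a hypothetical table `Person` with a column `nickname` t"
+"o be renamed to `nick`, an HTTP request may look like:"
+msgstr ""
+
+msgid "    GET (Document Root)/d/column_rename?table=Person&name=nickname&new_name=nick"
+msgstr ""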
+
+msgid "## Responses {#response}"
+msgstr ""
+
+msgid "This returns an array meaning the result of the operation, as the `body`."
+msgstr ""
+
+msgid ""
+"    [\n"
+"      [\n"
+"        <Groonga's status code>,\n"
+"        <Start time>,\n"
+"        <Elapsed time>\n"
+"      ],\n"
+"      <Column is successfully renamed or not>\n"
+"    ]"
+msgstr ""
+
+msgid ""
+"This command always returns a response with `200` as its `statusCode`, because"
+" this is a Groonga compatible command and errors of this command must be handl"
+"ed in the way same to Groonga's one."
+msgstr ""
+
+msgid "Response body's details:"
+msgstr ""
+
+msgid ""
+"Status code\n"
+": An integer which means the operation's result. Possible values are:"
+msgstr ""
+
+msgid ""
+"   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : Successfully processed"
+".\n"
+"   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : There is an"
+"y invalid argument."
+msgstr ""
+
+msgid ""
+"Start time\n"
+": An UNIX time which the operation was started on."
+msgstr ""
+
+msgid ""
+"Elapsed time\n"
+": A decimal of seconds meaning the elapsed time for the operation."
+msgstr ""
+
+msgid ""
+"Column is successfully renamed or not\n"
+": A boolean value meaning the column was successfully renamed or not. Possible"
+" values are:"
+msgstr ""
+
+msgid ""
+"   * `true`:The column was successfully renamed.\n"
+"   * `false`:The column was not renamed."
+msgstr ""

  Added: _po/ja/reference/1.1.0/commands/delete/index.po (+193 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/commands/delete/index.po    2014-11-30 23:20:40 +0900 (05b338a)
@@ -0,0 +1,193 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: delete\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## Abstract {#abstract}"
+msgstr ""
+
+msgid "The `delete` command removes records in a table."
+msgstr ""
+
+msgid ""
+"This is compatible to [the `delete` command of the Groonga](http://groonga.org"
+"/docs/reference/commands/delete.html)."
+msgstr ""
+
+msgid "## API types {#api-types}"
+msgstr ""
+
+msgid "### HTTP {#api-types-http}"
+msgstr ""
+
+msgid ""
+"Request endpoint\n"
+": `(Document Root)/d/delete`"
+msgstr ""
+
+msgid ""
+"Request methd\n"
+": `GET`"
+msgstr ""
+
+msgid ""
+"Request URL parameters\n"
+": Same to the list of [parameters](#parameters)."
+msgstr ""
+
+msgid ""
+"Request body\n"
+": Nothing."
+msgstr ""
+
+msgid ""
+"Response body\n"
+": A [response message](#response)."
+msgstr ""
+
+msgid "### REST {#api-types-rest}"
+msgstr ""
+
+msgid "Not supported."
+msgstr ""
+
+msgid "### Fluentd {#api-types-fluentd}"
+msgstr ""
+
+msgid ""
+"Style\n"
+": Request-Response. One response message is always returned per one request."
+msgstr ""
+
+msgid ""
+"`type` of the request\n"
+": `delete`"
+msgstr ""
+
+msgid ""
+"`body` of the request\n"
+": A hash of [parameters](#parameters)."
+msgstr ""
+
+msgid ""
+"`type` of the response\n"
+": `delete.result`"
+msgstr ""
+
+msgid "## Parameter syntax {#syntax}"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"table\" : \"<Name of the table>\",\n"
+"      \"key\"   : \"<Key of the record>\"\n"
+"    }"
+msgstr ""
+
+msgid "or"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"table\" : \"<Name of the table>\",\n"
+"      \"id\"    : \"<ID of the record>\"\n"
+"    }"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"table\"  : \"<Name of the table>\",\n"
+"      \"filter\" : \"<Complex search conditions>\"\n"
+"    }"
+msgstr ""
+
+msgid "## Parameter details {#parameters}"
+msgstr ""
+
+msgid ""
+"All parameters except `table` are optional.\n"
+"However, you must specify one of `key`, `id`, or `filter` to specify the recor"
+"d (records) to be removed."
+msgstr ""
+
+msgid ""
+"They are compatible to [the parameters of the `delete` command of the Groonga]"
+"(http://groonga.org/docs/reference/commands/delete.html#parameters). See the l"
+"inked document for more details."
+msgstr ""
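+
+msgid ""
+"For example, assuming a hypothetical table `Person` with a record whose key is"
+" `Alice`, an HTTP request may look like:"
+msgstr ""
+
+msgid "    GET (Document Root)/d/delete?table=Person&key=Alice"
+msgstr ""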
+
+msgid "## Responses {#response}"
+msgstr ""
+
+msgid "This returns an array meaning the result of the operation, as the `body`."
+msgstr ""
+
+msgid ""
+"    [\n"
+"      [\n"
+"        <Groonga's status code>,\n"
+"        <Start time>,\n"
+"        <Elapsed time>\n"
+"      ],\n"
+"      <Records are successfully removed or not>\n"
+"    ]"
+msgstr ""
+
+msgid ""
+"This command always returns a response with `200` as its `statusCode`, because"
+" this is a Groonga compatible command and errors of this command must be handl"
+"ed in the way same to Groonga's one."
+msgstr ""
+
+msgid "Response body's details:"
+msgstr ""
+
+msgid ""
+"Status code\n"
+": An integer which means the operation's result. Possible values are:"
+msgstr ""
+
+msgid ""
+"   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : Successfully processed"
+".\n"
+"   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : There is an"
+"y invalid argument."
+msgstr ""
+
+msgid ""
+"Start time\n"
+": An UNIX time which the operation was started on."
+msgstr ""
+
+msgid ""
+"Elapsed time\n"
+": A decimal of seconds meaning the elapsed time for the operation."
+msgstr ""
+
+msgid ""
+"Records are successfully removed or not\n"
+": A boolean value meaning specified records were successfully removed or not. "
+"Possible values are:"
+msgstr ""
+
+msgid ""
+"   * `true`:Records were successfully removed.\n"
+"   * `false`:Records were not removed."
+msgstr ""

  Added: _po/ja/reference/1.1.0/commands/index.po (+46 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/commands/index.po    2014-11-30 23:20:40 +0900 (009b8b9)
@@ -0,0 +1,46 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: Commands\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid "Here are available commands"
+msgstr ""
+
+msgid "## Built-in commands"
+msgstr ""
+
+msgid ""
+" * [search](search/): Searches data\n"
+" * [add](add/): Adds a record\n"
+" * system: Reports system information of the cluster\n"
+"   * [system.status](system/status/): Reports status information of the cluste"
+"r"
+msgstr ""
+
+msgid "## Groonga compatible commands"
+msgstr ""
+
+msgid ""
+" * [column_create](column-create/)\n"
+" * [column_list](column-list/)\n"
+" * [column_remove](column-remove/)\n"
+" * [column_rename](column-rename/)\n"
+" * [delete](delete/)\n"
+" * [load](load/)\n"
+" * [select](select/)\n"
+" * [table_create](table-create/)\n"
+" * [table_list](table-list/)\n"
+" * [table_remove](table-remove/)"
+msgstr ""

  Added: _po/ja/reference/1.1.0/commands/load/index.po (+191 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/commands/load/index.po    2014-11-30 23:20:40 +0900 (bb8aebd)
@@ -0,0 +1,191 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: load\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## Abstract {#abstract}"
+msgstr ""
+
+msgid ""
+"The `load` command adds new records to the specified table.\n"
+"Column values of existing records are updated by new values, if the table has "
+"a primary key and there are existing records with specified keys."
+msgstr ""
+
+msgid ""
+"This is compatible to [the `load` command of the Groonga](http://groonga.org/d"
+"ocs/reference/commands/load.html)."
+msgstr ""
+
+msgid "## API types {#api-types}"
+msgstr ""
+
+msgid "### HTTP (GET) {#api-types-http-get}"
+msgstr ""
+
+msgid ""
+"Request endpoint\n"
+": `(Document Root)/d/load`"
+msgstr ""
+
+msgid ""
+"Request methd\n"
+": `GET`"
+msgstr ""
+
+msgid ""
+"Request URL parameters\n"
+": Same to the list of [parameters](#parameters)."
+msgstr ""
+
+msgid ""
+"Request body\n"
+": Nothing."
+msgstr ""
+
+msgid ""
+"Response body\n"
+": A [response message](#response)."
+msgstr ""
+
+msgid "### HTTP (POST) {#api-types-http-post}"
+msgstr ""
+
+msgid ""
+"Request methd\n"
+": `POST`"
+msgstr ""
+
+msgid ""
+"Request URL parameters\n"
+": Same to the list of [parameters](#parameters), except `values`."
+msgstr ""
+
+msgid ""
+"Request body\n"
+": The value for the [parameter](#parameters) `values`."
+msgstr ""
+
+msgid "### REST {#api-types-rest}"
+msgstr ""
+
+msgid "Not supported."
+msgstr ""
+
+msgid "### Fluentd {#api-types-fluentd}"
+msgstr ""
+
+msgid "## Parameter syntax {#syntax}"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"values\"     : <Array of records to be loaded>,\n"
+"      \"table\"      : \"<Name of the table>\",\n"
+"      \"columns\"    : \"<List of column names for values, separated by ','>\",\n"
+"      \"ifexists\"   : \"<Grn_expr to determine records which should be updated>\""
+",\n"
+"      \"input_type\" : \"<Format type of the values>\"\n"
+"    }"
+msgstr ""
+
+msgid "## Parameter details {#parameters}"
+msgstr ""
+
+msgid "All parameters except `table` are optional."
+msgstr ""
+
+msgid ""
+"On the version {{ site.droonga_version }}, only following parameters are avail"
+"able. Others are simply ignored because they are not implemented."
+msgstr ""
+
+msgid ""
+" * `values`\n"
+" * `table`\n"
+" * `columns`"
+msgstr ""
+
+msgid ""
+"They are compatible to [the parameters of the `load` command of the Groonga](h"
+"ttp://groonga.org/docs/reference/commands/load.html#parameters). See the linke"
+"d document for more details."
+msgstr ""
+
+msgid ""
+"HTTP clients can send `values` as an URL parameter with `GET` method, or the r"
+"equest body with `POST` method.\n"
+"The URL parameter `values` is always ignored it it is sent with `POST` method."
+"\n"
+"You should send data with `POST` method if there is much data."
+msgstr ""
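+
+msgid ""
+"For example, a minimal sketch of a `POST` request to `(Document Root)/d/load?t"
+"able=Person` (the table name `Person` and its columns are hypothetical assumpt"
+"ions), whose request body is the value of `values`:"
+msgstr ""
+
+msgid ""
+"    [\n"
+"      { \"_key\" : \"Alice\", \"age\" : 20 },\n"
+"      { \"_key\" : \"Bob\",   \"age\" : 31 }\n"
+"    ]"
+msgstr ""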
+
+msgid "## Responses {#response}"
+msgstr ""
+
+msgid "This returns an array meaning the result of the operation, as the `body`."
+msgstr ""
+
+msgid ""
+"    [\n"
+"      [\n"
+"        <Groonga's status code>,\n"
+"        <Start time>,\n"
+"        <Elapsed time>\n"
+"      ],\n"
+"      [<Number of loaded records>]\n"
+"    ]"
+msgstr ""
+
+msgid ""
+"This command always returns a response with `200` as its `statusCode`, because"
+" this is a Groonga compatible command and errors of this command must be handl"
+"ed in the way same to Groonga's one."
+msgstr ""
+
+msgid "Response body's details:"
+msgstr ""
+
+msgid ""
+"Status code\n"
+": An integer which means the operation's result. Possible values are:"
+msgstr ""
+
+msgid ""
+"   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : Successfully processed"
+".\n"
+"   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : There is an"
+"y invalid argument."
+msgstr ""
+
+msgid ""
+"Start time\n"
+": An UNIX time which the operation was started on."
+msgstr ""
+
+msgid ""
+"Elapsed time\n"
+": A decimal of seconds meaning the elapsed time for the operation."
+msgstr ""
+
+msgid ""
+"Number of loaded records\n"
+": An positive integer meaning the number of added or updated records."
+msgstr ""

  Added: _po/ja/reference/1.1.0/commands/search/index.po (+1998 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/commands/search/index.po    2014-11-30 23:20:40 +0900 (fcb1e80)
@@ -0,0 +1,1998 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: search\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## Abstract {#abstract}"
+msgstr ""
+
+msgid ""
+"The `search` command finds records from the specified table based on given con"
+"ditions, and returns found records and/or related information."
+msgstr ""
+
+msgid ""
+"This is designed as the most basic (low layer) command on Droonga, to search i"
+"nformation from a database. When you want to add a new plugin including \"searc"
+"h\" feature, you should develop it as just a wrapper of this command, instead o"
+"f developing something based on more low level technologies."
+msgstr ""
+
+msgid "## API types {#api-types}"
+msgstr ""
+
+msgid "### HTTP {#api-types-http}"
+msgstr ""
+
+msgid ""
+"Request endpoint\n"
+": `(Document Root)/droonga/search`"
+msgstr ""
+
+msgid ""
+"Request methd\n"
+": `POST`"
+msgstr ""
+
+msgid ""
+"Request URL parameters\n"
+": Nothing."
+msgstr ""
+
+msgid ""
+"Request body\n"
+": A hash of [parameters](#parameters)."
+msgstr ""
+
+msgid ""
+"Response body\n"
+": A [response message](#response)."
+msgstr ""
+
+msgid "### REST {#api-types-rest}"
+msgstr ""
+
+msgid ""
+"Request endpoint\n"
+": `(Document Root)/tables/(table name)`"
+msgstr ""
+
+msgid ""
+"Request methd\n"
+": `GET`"
+msgstr ""
+
+msgid ""
+"Request URL parameters\n"
+": They are applied to corresponding [parameters](#parameters):"
+msgstr ""
+
+msgid ""
+"   * `query`: A string, applied to [`(root).(table name).condition.query`](#us"
+"age-condition-query-syntax).\n"
+"   * `match_to`: A comma-separated string, applied to [`(root).(table name).co"
+"ndition.matchTo`](#usage-condition-query-syntax).\n"
+"   * `sort_by`: A comma-separated string, applied to [`(root).(table name).sor"
+"tBy`](#query-sortBy).\n"
+"   * `attributes`: A comma-separated string, applied to [`(root).(table name)."
+"output.attributes`](#query-output).\n"
+"   * `offset`: An integer, applied to [`(root).(table name).output.offset`](#q"
+"uery-output).\n"
+"   * `limit`: An integer, applied to [`(root).(table name).output.limit`](#que"
+"ry-output).\n"
+"   * `timeout`: An integer, applied to [`(root).timeout`](#parameter-timeout)."
+msgstr ""
+
+msgid ""
+"<!--\n"
+"   * `group_by[(column name)][key]`: A string, applied to [`(root).(column nam"
+"e).groupBy.key`](#query-groupBy).\n"
+"   * `group_by[(column name)][max_n_sub_records]`: An integer, applied to [`(r"
+"oot).(column name).groupBy.maxNSubRecords`](#query-groupBy).\n"
+"   * `group_by[(column name)][attributes]`: A comma-separated string, applied "
+"to [`(root).(column name).output.attributes`](#query-output).\n"
+"   * `group_by[(column name)][attributes][(attribute name)][source]`: A string"
+", applied to [`(root).(column name).output.attributes.(attribute name).source`"
+"](#query-output).\n"
+"   * `group_by[(column name)][attributes][(attribute name)][attributes]`: A co"
+"mma-separated string, applied to [`(root).(column name).output.attributes.(att"
+"ribute name).attributes`](#query-output).\n"
+"   * `group_by[(column name)][limit]`: An integer, applied to [`(root).(column"
+" name).output.limit`](#query-output).\n"
+"-->"
+msgstr ""
+
+msgid "  For example:"
+msgstr ""
+
+msgid "   * `/tables/Store?query=NY&match_to=_key&attributes=_key,*&limit=10`"
+msgstr ""
+
+msgid ""
+"<!--\n"
+"   * `/tables/Store?query=NY&match_to=_key&attributes=_key,*&limit=10&group_by"
+"[location][key]=location&group_by[location][limit]=5&group_by[location][attrib"
+"utes]=_key,_nsubrecs`\n"
+"   * `/tables/Store?query=NY&match_to=_key&attributes=_key,*&limit=10&group_by"
+"[location][key]=location&group_by[location][limit]=5&group_by[location][attrib"
+"utes][_key][souce]=_key&group_by[location][attributes][_nsubrecs][souce]=_nsub"
+"recs`\n"
+"   * `/tables/Store?query=NY&match_to=_key&limit=0&group_by[location][key]=loc"
+"ation&group_by[location][max_n_sub_records]=5&group_by[location][limit]=5&grou"
+"p_by[location][attributes][sub_records][source]=_subrecs&group_by[location][at"
+"tributes][sub_records][attributes]=_key,location`\n"
+"-->"
+msgstr ""
+
+msgid ""
+"Request body\n"
+": Nothing."
+msgstr ""
+
+msgid "### Fluentd {#api-types-fluentd}"
+msgstr ""
+
+msgid ""
+"Style\n"
+": Request-Response. One response message is always returned per one request."
+msgstr ""
+
+msgid ""
+"`type` of the request\n"
+": `search`"
+msgstr ""
+
+msgid ""
+"`body` of the request\n"
+": A hash of [parameters](#parameters)."
+msgstr ""
+
+msgid ""
+"`type` of the response\n"
+": `search.result`"
+msgstr ""
+
+msgid "## Parameter syntax {#syntax}"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"timeout\" : <Seconds to be timed out>,\n"
+"      \"queries\" : {\n"
+"        \"<Name of the query 1>\" : {\n"
+"          \"source\"    : \"<Name of a table or another query>\",\n"
+"          \"condition\" : <Search conditions>,\n"
+"          \"sortBy\"    : <Sort conditions>,\n"
+"          \"groupBy\"   : <Group conditions>,\n"
+"          \"output\"    : <Output conditions>\n"
+"        },\n"
+"        \"<Name of the query 2>\" : { ... },\n"
+"        ...\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid "## Usage {#usage}"
+msgstr ""
+
+msgid ""
+"This section describes how to use this command, via a typical usage with follo"
+"wing table:"
+msgstr ""
+
+msgid "Person table (with primary key):"
+msgstr ""
+
+msgid ""
+"|_key|name|age|sex|job|note|\n"
+"|Alice Arnold|Alice Arnold|20|female|announcer||\n"
+"|Alice Cooper|Alice Cooper|30|male|musician||\n"
+"|Alice Miller|Alice Miller|25|female|doctor||\n"
+"|Bob Dole|Bob Dole|42|male|lawer||\n"
+"|Bob Cousy|Bob Cousy|38|male|basketball player||\n"
+"|Bob Wolcott|Bob Wolcott|36|male|baseball player||\n"
+"|Bob Evans|Bob Evans|31|male|driver||\n"
+"|Bob Ross|Bob Ross|54|male|painter||\n"
+"|Lewis Carroll|Lewis Carroll|66|male|writer|the author of Alice's Adventures i"
+"n Wonderland|"
+msgstr ""
+
+msgid "Note: `name` and `note` are indexed with `TokensBigram`."
+msgstr ""
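+
+msgid ""
+"As a reference, here is a rough sketch of Groonga compatible commands that cou"
+"ld build such a table. The schema details (types, flags, and the `Terms` lexic"
+"on) are assumptions for illustration, not part of this document:"
+msgstr ""
+
+msgid ""
+"    table_create Person TABLE_HASH_KEY ShortText\n"
+"    column_create Person name COLUMN_SCALAR ShortText\n"
+"    column_create Person age COLUMN_SCALAR UInt32\n"
+"    column_create Person sex COLUMN_SCALAR ShortText\n"
+"    column_create Person job COLUMN_SCALAR ShortText\n"
+"    column_create Person note COLUMN_SCALAR Text\n"
+"    table_create Terms TABLE_PAT_KEY ShortText --default_tokenizer TokenBigram\n"
+"    column_create Terms person_name COLUMN_INDEX|WITH_POSITION Person name\n"
+"    column_create Terms person_note COLUMN_INDEX|WITH_POSITION Person note"
+msgstr ""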
+
+msgid "### Basic usage {#usage-basic}"
+msgstr ""
+
+msgid "This is a simple example to output all records of the Person table:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\" : \"search\",\n"
+"      \"body\" : {\n"
+"        \"queries\" : {\n"
+"          \"people\" : {\n"
+"            \"source\" : \"Person\",\n"
+"            \"output\" : {\n"
+"              \"elements\"   : [\"count\", \"records\"],\n"
+"              \"attributes\" : [\"_key\", \"*\"],\n"
+"              \"limit\"      : -1\n"
+"            }\n"
+"          }\n"
+"        }\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid ""
+"    => {\n"
+"         \"type\" : \"search.result\",\n"
+"         \"body\" : {\n"
+"           \"people\" : {\n"
+"             \"count\" : 9,\n"
+"             \"records\" : [\n"
+"               [\"Alice Arnold\", \"Alice Arnold\", 20, \"female\", \"announcer\", \"\"]"
+",\n"
+"               [\"Alice Cooper\", \"Alice Cooper\", 30, \"male\", \"musician\", \"\"],\n"
+"               [\"Alice Miller\", \"Alice Miller\", 25, \"male\", \"doctor\", \"\"],\n"
+"               [\"Bob Dole\", \"Bob Dole\", 42, \"male\", \"lawer\", \"\"],\n"
+"               [\"Bob Cousy\", \"Bob Cousy\", 38, \"male\", \"basketball player\", \"\"]"
+",\n"
+"               [\"Bob Wolcott\", \"Bob Wolcott\", 36, \"male\", \"baseball player\", \""
+"\"],\n"
+"               [\"Bob Evans\", \"Bob Evans\", 31, \"male\", \"driver\", \"\"],\n"
+"               [\"Bob Ross\", \"Bob Ross\", 54, \"male\", \"painter\", \"\"],\n"
+"               [\"Lewis Carroll\", \"Lewis Carroll\", 66, \"male\", \"writer\",\n"
+"                \"the author of Alice's Adventures in Wonderland\"]\n"
+"             ]\n"
+"           }\n"
+"         }\n"
+"       }"
+msgstr ""
+
+msgid ""
+"The name `people` is a temporary name for the search query and its result.\n"
+"A response of a `search` command will be returned as a hash, and the keys are "
+"same to keys of the given `queries`.\n"
+"So, this means: \"name the search result of the query as `people`\"."
+msgstr ""
+
+msgid "Why the command above returns all informations of the table? Because:"
+msgstr ""
+
+msgid ""
+" * There is no search condition. This command matches all records in the speci"
+"fied table, if no condition is specified.\n"
+" * [`output`](#query-output)'s `elements` contains `records` (and `count`) col"
+"umn(s). The parameter `elements` controls the returned information. Matched re"
+"cords are returned as `records`, the total number of matched records are retur"
+"ned as `count`.\n"
+" * [`output`](#query-output)'s `limit` is `-1`. The parameter `limit` controls"
+" the number of returned records, and `-1` means \"return all records\".\n"
+" * [`output`](#query-output)'s `attributes` contains two values `\"_key\"` and `"
+"\"*\"`. They mean \"all columns of the Person table, including the `_key`\" and it"
+" equals to `[\"_key\", \"name\", \"age\", \"sex\", \"job\", \"note\"]` in this case. The p"
+"arameter `attributes` controls which columns' value are returned."
+msgstr ""
+
+msgid "#### Search conditions {#usage-condition}"
+msgstr ""
+
+msgid ""
+"Search conditions are specified via the `condition` parameter. There are two s"
+"tyles of search conditions: \"script syntax\" and \"query syntax\". See [`conditio"
+"n` parameter](#query-condition) for more details."
+msgstr ""
+
+msgid "##### Search conditions in Script syntax {#usage-condition-script-syntax}"
+msgstr ""
+
+msgid ""
+"Search conditions in script syntax are similar to ECMAScript. For example, fol"
+"lowing query means \"find records that `name` contains `Alice` and `age` is lar"
+"ger than or equal to `25`\":"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\" : \"search\",\n"
+"      \"body\" : {\n"
+"        \"queries\" : {\n"
+"          \"people\" : {\n"
+"            \"source\"    : \"Person\",\n"
+"            \"condition\" : \"name @ 'Alice' && age >= 25\"\n"
+"            \"output\"    : {\n"
+"              \"elements\"   : [\"count\", \"records\"],\n"
+"              \"attributes\" : [\"name\", \"age\"],\n"
+"              \"limit\"      : -1\n"
+"            }\n"
+"          }\n"
+"        }\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid ""
+"    => {\n"
+"         \"type\" : \"search.result\",\n"
+"         \"body\" : {\n"
+"           \"people\" : {\n"
+"             \"count\" : 2,\n"
+"             \"records\" : [\n"
+"               [\"Alice Arnold\", 20],\n"
+"               [\"Alice Cooper\", 30],\n"
+"               [\"Alice Miller\", 25]\n"
+"             ]\n"
+"           }\n"
+"         }\n"
+"       }"
+msgstr ""
+
+msgid ""
+"[Script syntax is compatible to Groonga's one](http://groonga.org/docs/referen"
+"ce/grn_expr/script_syntax.html). See the linked document for more details."
+msgstr ""
+
+msgid "##### Search conditions in Query syntax {#usage-condition-query-syntax}"
+msgstr ""
+
+msgid ""
+"The query syntax is mainly designed for search boxes in webpages. For example,"
+" following query means \"find records that `name` or `note` contain the given w"
+"ord, and the word is `Alice`\":"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\" : \"search\",\n"
+"      \"body\" : {\n"
+"        \"queries\" : {\n"
+"          \"people\" : {\n"
+"            \"source\"    : \"Person\",\n"
+"            \"condition\" : {\n"
+"              \"query\"   : \"Alice\",\n"
+"              \"matchTo\" : [\"name\", \"note\"]\n"
+"            },\n"
+"            \"output\"    : {\n"
+"              \"elements\"   : [\"count\", \"records\"],\n"
+"              \"attributes\" : [\"name\", \"note\"],\n"
+"              \"limit\"      : -1\n"
+"            }\n"
+"          }\n"
+"        }\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid ""
+"    => {\n"
+"         \"type\" : \"search.result\",\n"
+"         \"body\" : {\n"
+"           \"people\" : {\n"
+"             \"count\" : 4,\n"
+"             \"records\" : [\n"
+"               [\"Alice Arnold\", \"\"],\n"
+"               [\"Alice Cooper\", \"\"],\n"
+"               [\"Alice Miller\", \"\"],\n"
+"               [\"Lewis Carroll\",\n"
+"                \"the author of Alice's Adventures in Wonderland\"]\n"
+"             ]\n"
+"           }\n"
+"         }\n"
+"       }"
+msgstr ""
+
+msgid ""
+"[Query syntax is compatible to Groonga's one](http://groonga.org/docs/referenc"
+"e/grn_expr/query_syntax.html). See the linked document for more details."
+msgstr ""
+
+msgid "#### Sorting of search results {#usage-sort}"
+msgstr ""
+
+msgid ""
+"Returned records can be sorted by conditions specified as the `sortBy` paramet"
+"er. For example, following query means \"sort results by their `age`, in ascend"
+"ing order\":"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\" : \"search\",\n"
+"      \"body\" : {\n"
+"        \"queries\" : {\n"
+"          \"people\" : {\n"
+"            \"source\"    : \"Person\",\n"
+"            \"condition\" : \"name @ 'Alice'\"\n"
+"            \"sortBy\"    : [\"age\"],\n"
+"            \"output\"    : {\n"
+"              \"elements\"   : [\"count\", \"records\"],\n"
+"              \"attributes\" : [\"name\", \"age\"],\n"
+"              \"limit\"      : -1\n"
+"            }\n"
+"          }\n"
+"        }\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid ""
+"    => {\n"
+"         \"type\" : \"search.result\",\n"
+"         \"body\" : {\n"
+"           \"people\" : {\n"
+"             \"count\" : 8,\n"
+"             \"records\" : [\n"
+"               [\"Alice Arnold\", 20],\n"
+"               [\"Alice Miller\", 25],\n"
+"               [\"Alice Cooper\", 30]\n"
+"             ]\n"
+"           }\n"
+"         }\n"
+"       }"
+msgstr ""
+
+msgid ""
+"If you add `-` before name of columns, then search results are returned in des"
+"cending order. For example:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\" : \"search\",\n"
+"      \"body\" : {\n"
+"        \"queries\" : {\n"
+"          \"people\" : {\n"
+"            \"source\"    : \"Person\",\n"
+"            \"condition\" : \"name @ 'Alice'\"\n"
+"            \"sortBy\"    : [\"-age\"],\n"
+"            \"output\"    : {\n"
+"              \"elements\"   : [\"count\", \"records\"],\n"
+"              \"attributes\" : [\"name\", \"age\"],\n"
+"              \"limit\"      : -1\n"
+"            }\n"
+"          }\n"
+"        }\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid ""
+"    => {\n"
+"         \"type\" : \"search.result\",\n"
+"         \"body\" : {\n"
+"           \"people\" : {\n"
+"             \"count\" : 8,\n"
+"             \"records\" : [\n"
+"               [\"Alice Cooper\", 30],\n"
+"               [\"Alice Miller\", 25],\n"
+"               [\"Alice Arnold\", 20]\n"
+"             ]\n"
+"           }\n"
+"         }\n"
+"       }"
+msgstr ""
+
+msgid "See [`sortBy` parameter](#query-sortBy) for more details."
+msgstr ""
+
+msgid "#### Paging of search results {#usage-paging}"
+msgstr ""
+
+msgid ""
+"Search results can be retuned partially via `offset` and `limit` under the [`o"
+"utput`](#query-output) parameter. For example, following queries will return 2"
+"0 or more search results by 10's."
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\" : \"search\",\n"
+"      \"body\" : {\n"
+"        \"queries\" : {\n"
+"          \"people\" : {\n"
+"            \"source\" : \"Person\",\n"
+"            \"output\" : {\n"
+"              \"elements\"   : [\"count\", \"records\"],\n"
+"              \"attributes\" : [\"name\"],\n"
+"              \"offset\"     : 0,\n"
+"              \"limit\"      : 10\n"
+"            }\n"
+"          }\n"
+"        }\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid "    => returns 10 results from the 1st to the 10th."
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\" : \"search\",\n"
+"      \"body\" : {\n"
+"        \"queries\" : {\n"
+"          \"people\" : {\n"
+"            \"source\" : \"Person\",\n"
+"            \"output\" : {\n"
+"              \"elements\"   : [\"count\", \"records\"],\n"
+"              \"attributes\" : [\"name\"],\n"
+"              \"offset\"     : 10,\n"
+"              \"limit\"      : 10\n"
+"            }\n"
+"          }\n"
+"        }\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid "    => returns 10 results from the 11th to the 20th."
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\" : \"search\",\n"
+"      \"body\" : {\n"
+"        \"queries\" : {\n"
+"          \"people\" : {\n"
+"            \"source\" : \"Person\",\n"
+"            \"output\" : {\n"
+"              \"elements\"   : [\"count\", \"records\"],\n"
+"              \"attributes\" : [\"name\"],\n"
+"              \"offset\"     : 20,\n"
+"              \"limit\"      : 10\n"
+"            }\n"
+"          }\n"
+"        }\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid "    => returns 10 results from the 21st to the 30th."
+msgstr ""
+
+msgid ""
+"The value `-1` is not recommended  for the `limit` parameter, in regular use. "
+"It will return too much results and increase traffic loads. Instead `100` or l"
+"ess value is recommended for the `limit` parameter. Then you should do paging "
+"by the `offset` parameter."
+msgstr ""
+
+msgid "See [`output` parameter](#query-output) for more details."
+msgstr ""
+
+msgid ""
+"Moreover, you can do paging via [the `sortBy` parameter](#query-sortBy-hash) a"
+"nd it will work faster than the paging by the `output` parameter. You should d"
+"o paging via the `sortBy` parameter instead of `output` as much as possible."
+msgstr ""
+
+msgid "#### Output format {#usage-format}"
+msgstr ""
+
+msgid ""
+"Search result records in examples above are shown as arrays of arrays, but the"
+"y can be returned as arrays of hashes by the [`output`](#query-output)'s `form"
+"at` parameter. If you specify `complex` for the `format`, then results are ret"
+"urned like:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\" : \"search\",\n"
+"      \"body\" : {\n"
+"        \"queries\" : {\n"
+"          \"people\" : {\n"
+"            \"source\" : \"Person\",\n"
+"            \"output\" : {\n"
+"              \"elements\"   : [\"count\", \"records\"],\n"
+"              \"attributes\" : [\"_key\", \"name\", \"age\", \"sex\", \"job\", \"note\"],\n"
+"              \"limit\"      : 3,\n"
+"              \"format\"     : \"complex\"\n"
+"            }\n"
+"          }\n"
+"        }\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid ""
+"    => {\n"
+"         \"type\" : \"search.result\",\n"
+"         \"body\" : {\n"
+"           \"people\" : {\n"
+"             \"count\" : 9,\n"
+"             \"records\" : [\n"
+"               { \"_key\" : \"Alice Arnold\",\n"
+"                 \"name\" : \"Alice Arnold\",\n"
+"                 \"age\"  : 20,\n"
+"                 \"sex\"  : \"female\",\n"
+"                 \"job\"  : \"announcer\",\n"
+"                 \"note\" : \"\" },\n"
+"               { \"_key\" : \"Alice Cooper\",\n"
+"                 \"name\" : \"Alice Cooper\",\n"
+"                 \"age\"  : 30,\n"
+"                 \"sex\"  : \"male\",\n"
+"                 \"job\"  : \"musician\",\n"
+"                 \"note\" : \"\" },\n"
+"               { \"_key\" : \"Alice Miller\",\n"
+"                 \"name\" : \"Alice Miller\",\n"
+"                 \"age\"  : 25,\n"
+"                 \"sex\"  : \"female\",\n"
+"                 \"job\"  : \"doctor\",\n"
+"                 \"note\" : \"\" }\n"
+"             ]\n"
+"           }\n"
+"         }\n"
+"       }"
+msgstr ""
+
+msgid ""
+"Search result records will be returned as an array of hashes, when you specify"
+" `complex` as the value of the `format` parameter.\n"
+"Otherwise - `simple` or nothing is specified -, records are returned as an arr"
+"ay of arrays."
+msgstr ""
+
+msgid ""
+"See [`output` parameters](#query-output) and [responses](#response) for more d"
+"etails."
+msgstr ""
+
+msgid "### Advanced usage {#usage-advanced}"
+msgstr ""
+
+msgid "#### Grouping {#usage-group}"
+msgstr ""
+
+msgid ""
+"You can group search results by a column, via the [`groupBy`](#query-groupBy) "
+"parameters. For example, following query returns a result grouped by the `sex`"
+" column, with the count of original search results:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\" : \"search\",\n"
+"      \"body\" : {\n"
+"        \"queries\" : {\n"
+"          \"sexuality\" : {\n"
+"            \"source\"  : \"Person\",\n"
+"            \"groupBy\" : \"sex\",\n"
+"            \"output\"  : {\n"
+"              \"elements\"   : [\"count\", \"records\"],\n"
+"              \"attributes\" : [\"_key\", \"_nsubrecs\"],\n"
+"              \"limit\"      : -1\n"
+"            }\n"
+"          }\n"
+"        }\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid ""
+"    => {\n"
+"         \"type\" : \"search.result\",\n"
+"         \"body\" : {\n"
+"           \"sexuality\" : {\n"
+"             \"count\" : 2,\n"
+"             \"records\" :\n"
+"               [\"female\", 2],\n"
+"               [\"male\", 7]\n"
+"             ]\n"
+"           }\n"
+"         }\n"
+"       }"
+msgstr ""
+
+msgid ""
+"The result means: \"There are two `female` records and seven `male` records, mo"
+"reover there are two types for the column `sex`."
+msgstr ""
+
+msgid ""
+"You can also extract the ungrouped record by the `maxNSubRecords` parameter an"
+"d the `_subrecs` virtual column. For example, following query returns the resu"
+"lt grouped by `sex` and extract two ungrouped records:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\" : \"search\",\n"
+"      \"body\" : {\n"
+"        \"queries\" : {\n"
+"          \"sexuality\" : {\n"
+"            \"source\"  : \"Person\",\n"
+"            \"groupBy\" : {\n"
+"              \"keys\"           : \"sex\",\n"
+"              \"maxNSubRecords\" : 2\n"
+"            },\n"
+"            \"output\"  : {\n"
+"              \"elements\"   : [\"count\", \"records\"],\n"
+"              \"attributes\" : [\n"
+"                \"_key\",\n"
+"                \"_nsubrecs\",\n"
+"                { \"label\"      : \"subrecords\",\n"
+"                  \"source\"     : \"_subrecs\",\n"
+"                  \"attributes\" : [\"name\"] }\n"
+"              ],\n"
+"              \"limit\"      : -1\n"
+"            }\n"
+"          }\n"
+"        }\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid ""
+"    => {\n"
+"         \"type\" : \"search.result\",\n"
+"         \"body\" : {\n"
+"           \"sexuality\" : {\n"
+"             \"count\" : 2,\n"
+"             \"records\" :\n"
+"               [\"female\", 2, [[\"Alice Arnold\"], [\"Alice Miller\"]]],\n"
+"               [\"male\",   7, [[\"Alice Cooper\"], [\"Bob Dole\"]]]\n"
+"             ]\n"
+"           }\n"
+"         }\n"
+"       }"
+msgstr ""
+
+msgid "See [`groupBy` parameters](#query-groupBy) for more details."
+msgstr ""
+
+msgid "#### Multiple search queries in one request {#usage-multiple-queries}"
+msgstr ""
+
+msgid ""
+"Multiple queries can be appear in one `search` command. For example, following"
+" query searches people younger than 25 or older than 40:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\" : \"search\",\n"
+"      \"body\" : {\n"
+"        \"queries\" : {\n"
+"          \"junior\" : {\n"
+"            \"source\"    : \"Person\",\n"
+"            \"condition\" : \"age <= 25\",\n"
+"            \"output\"    : {\n"
+"              \"elements\"   : [\"count\", \"records\"],\n"
+"              \"attributes\" : [\"name\", \"age\"],\n"
+"              \"limit\"      : -1\n"
+"            }\n"
+"          },\n"
+"          \"senior\" : {\n"
+"            \"source\"    : \"Person\",\n"
+"            \"condition\" : \"age >= 40\",\n"
+"            \"output\"    : {\n"
+"              \"elements\"   : [\"count\", \"records\"],\n"
+"              \"attributes\" : [\"name\", \"age\"],\n"
+"              \"limit\"      : -1\n"
+"            }\n"
+"          }\n"
+"        }\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid ""
+"    => {\n"
+"         \"type\" : \"search.result\",\n"
+"         \"body\" : {\n"
+"           \"junior\" : {\n"
+"             \"count\" : 2,\n"
+"             \"records\" : [\n"
+"               [\"Alice Arnold\", 20],\n"
+"               [\"Alice Miller\", 25]\n"
+"             ]\n"
+"           },\n"
+"           \"senior\" : {\n"
+"             \"count\" : 3,\n"
+"             \"records\" : [\n"
+"               [\"Bob Dole\", 42],\n"
+"               [\"Bob Ross\", 54],\n"
+"               [\"Lewis Carroll\", 66]\n"
+"             ]\n"
+"           }\n"
+"         }\n"
+"       }"
+msgstr ""
+
+msgid ""
+"Each search result can be identified by the temporary name given for each quer"
+"y."
+msgstr ""
+
+msgid "#### Chained search queries {#usage-chain}"
+msgstr ""
+
+msgid ""
+"You can specify not only an existing table, but search result of another query"
+" also, as the value of the \"source\" parameter. Chained search queries can do f"
+"lexible search in just one request."
+msgstr ""
+
+msgid ""
+"For example, the following query returns two results: records that their `name"
+"` contains `Alice`, and results grouped by their `sex` column:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\" : \"search\",\n"
+"      \"body\" : {\n"
+"        \"queries\" : {\n"
+"          \"people\" : {\n"
+"            \"source\"    : \"Person\",\n"
+"            \"condition\" : \"name @ 'Alice'\"\n"
+"            \"output\"    : {\n"
+"              \"elements\"   : [\"count\", \"records\"],\n"
+"              \"attributes\" : [\"name\", \"age\"],\n"
+"              \"limit\"      : -1\n"
+"            }\n"
+"          },\n"
+"          \"sexuality\" : {\n"
+"            \"source\"  : \"people\",\n"
+"            \"groupBy\" : \"sex\",\n"
+"            \"output\"  : {\n"
+"              \"elements\"   : [\"count\", \"records\"],\n"
+"              \"attributes\" : [\"_key\", \"_nsubrecs\"],\n"
+"              \"limit\"      : -1\n"
+"            }\n"
+"          }\n"
+"        }\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid ""
+"    => {\n"
+"         \"type\" : \"search.result\",\n"
+"         \"body\" : {\n"
+"           \"people\" : {\n"
+"             \"count\" : 8,\n"
+"             \"records\" : [\n"
+"               [\"Alice Cooper\", 30],\n"
+"               [\"Alice Miller\", 25],\n"
+"               [\"Alice Arnold\", 20]\n"
+"             ]\n"
+"           },\n"
+"           \"sexuality\" : {\n"
+"             \"count\" : 2,\n"
+"             \"records\" :\n"
+"               [\"female\", 2],\n"
+"               [\"male\", 1]\n"
+"             ]\n"
+"           }\n"
+"         }\n"
+"       }"
+msgstr ""
+
+msgid ""
+"You can use search queries just internally, without output. For example, the f"
+"ollowing query does: 1) group records of the Person table by their `job` colum"
+"n, and 2) extract grouped results which have the text `player` in their `job`."
+" (*Note: The second query will be done without indexes, so it can be slow.)"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\" : \"search\",\n"
+"      \"body\" : {\n"
+"        \"queries\" : {\n"
+"          \"allJob\" : {\n"
+"            \"source\"  : \"Person\",\n"
+"            \"groupBy\" : \"job\"\n"
+"          },\n"
+"          \"playerJob\" : {\n"
+"            \"source\"    : \"allJob\",\n"
+"            \"condition\" : \"_key @ `player`\",\n"
+"            \"output\"  : {\n"
+"              \"elements\"   : [\"count\", \"records\"],\n"
+"              \"attributes\" : [\"_key\", \"_nsubrecs\"],\n"
+"              \"limit\"      : -1\n"
+"            }\n"
+"          }\n"
+"        }\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid ""
+"    => {\n"
+"         \"type\" : \"search.result\",\n"
+"         \"body\" : {\n"
+"           \"playerJob\" : {\n"
+"             \"count\" : 2,\n"
+"             \"records\" : [\n"
+"               [\"basketball player\", 1],\n"
+"               [\"baseball player\", 1]\n"
+"             ]\n"
+"           }\n"
+"         }\n"
+"       }"
+msgstr ""
+
+msgid "## Parameter details {#parameters}"
+msgstr ""
+
+msgid "### Container parameters {#container-parameters}"
+msgstr ""
+
+msgid "#### `timeout` {#parameter-timeout}"
+msgstr ""
+
+msgid ""
+"*Note: This parameter is not implemented yet on the version {{ site.droonga_ve"
+"rsion }}."
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Threshold to time out for the request."
+msgstr ""
+
+msgid ""
+"Value\n"
+": An integer in milliseconds."
+msgstr ""
+
+msgid ""
+"Default value\n"
+": `10000` (10 seconds)"
+msgstr ""
+
+msgid ""
+"Droonga Engine will return an error response instead of a search result, if th"
+"e search operation take too much time, longer than the given `timeout`.\n"
+"Clients may free resources for the search operation after the timeout."
+msgstr ""
+
+msgid "#### `queries` {#parameter-queries}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Search queries."
+msgstr ""
+
+msgid ""
+"Value\n"
+": A hash. Keys of the hash are query names, values of the hash are [queries (h"
+"ashes of query parameters)](#query-parameters)."
+msgstr ""
+
+msgid ""
+"Default value\n"
+": Nothing. This is a required parameter."
+msgstr ""
+
+msgid "You can put multiple search queries in a `search` request."
+msgstr ""
+
+msgid ""
+"On the {{ site.droonga_version }}, all search results for a request are return"
+"ed in one time. In the future, as an optional behaviour, each result can be re"
+"turned as separated messages progressively."
+msgstr ""
+
+msgid "### Parameters of each query {#query-parameters}"
+msgstr ""
+
+msgid "#### `source` {#query-source}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": A source of a search operation."
+msgstr ""
+
+msgid ""
+"Value\n"
+": A name string of an existing table, or a name of another query."
+msgstr ""
+
+msgid ""
+"You can do a facet search, specifying a name of another search query as its so"
+"urce."
+msgstr ""
+
+msgid ""
+"The order of operations is automatically resolved by Droonga itself.\n"
+"You don't have to write queries in the order they should be operated in."
+msgstr ""
+
+msgid "#### `condition` {#query-condition}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Conditions to search records from the given source."
+msgstr ""
+
+msgid ""
+"Value\n"
+": Possible patterns:"
+msgstr ""
+
+msgid ""
+"  1. A [script syntax](http://groonga.org/docs/reference/grn_expr/script_synta"
+"x.html) string.\n"
+"  2. A hash including [script syntax](http://groonga.org/docs/reference/grn_ex"
+"pr/script_syntax.html) string.\n"
+"  3. A hash including [query syntax](http://groonga.org/docs/reference/grn_exp"
+"r/query_syntax.html) string.\n"
+"  4. An array of conditions from 1 to 3 and an operator."
+msgstr ""
+
+msgid ""
+"Default value\n"
+": Nothing."
+msgstr ""
+
+msgid ""
+"If no condition is given, then all records in the source will appear as the se"
+"arch result, for following operations and the output."
+msgstr ""
+
+msgid ""
+"##### Search condition in a Script syntax string {#query-condition-script-synt"
+"ax-string}"
+msgstr ""
+
+msgid "This is a sample condition in the script syntax:"
+msgstr ""
+
+msgid "    \"name == 'Alice' && age >= 20\""
+msgstr ""
+
+msgid ""
+"It means \"the value of the `name` column equals to `\"Alice\"`, and the value of"
+" the `age` column is `20` or more\"."
+msgstr ""
+
+msgid ""
+"See [the reference document of the script syntax on Groonga](http://groonga.or"
+"g/docs/reference/grn_expr/script_syntax.html) for more details."
+msgstr ""
+
+msgid ""
+"##### Search condition in a hash based on the Script syntax {#query-condition-"
+"script-syntax-hash}"
+msgstr ""
+
+msgid ""
+"In this pattern, you'll specify a search condition as a hash based on a \n"
+"[script syntax string](#query-condition-script-syntax-string), like:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"script\"      : \"name == 'Alice' && age >= 20\",\n"
+"      \"allowUpdate\" : true\n"
+"    }"
+msgstr ""
+
+msgid ""
+"(*Note: under construction because the specification of the `allowUpdate` para"
+"meter is not defined yet.)"
+msgstr ""
+
+msgid ""
+"##### Search condition in a hash based on the Query syntax {#query-condition-q"
+"uery-syntax-hash}"
+msgstr ""
+
+msgid "In this pattern, you'll specify a search condition as a hash like:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"query\"                    : \"Alice\",\n"
+"      \"matchTo\"                  : [\"name * 2\", \"job * 1\"],\n"
+"      \"defaultOperator\"          : \"&&\",\n"
+"      \"allowPragma\"              : true,\n"
+"      \"allowColumn\"              : true,\n"
+"      \"matchEscalationThreshold\" : 10\n"
+"    }"
+msgstr ""
+
+msgid ""
+"`query`\n"
+": A string to specify the main search query. In most cases, a text posted via "
+"a search box in a webpage will be given.\n"
+"  See [the document of the query syntax in Groonga](http://groonga.org/docs/re"
+"ference/grn_expr/query_syntax.html) for more details.\n"
+"  This parameter is always required."
+msgstr ""
+
+msgid ""
+"`matchTo`\n"
+": An array of strings, meaning the list of column names to be searched by defa"
+"ult. If you specify no column name in the `query`, it will work as a search qu"
+"ery for columns specified by this parameter.\n"
+"  You can apply weighting for each column, like `name * 2`.\n"
+"  This parameter is optional."
+msgstr ""
+
+msgid ""
+"`defaultOperator`\n"
+": A string to specify the default logical operator for multiple queries listed"
+" in the `query`. Possible values:"
+msgstr ""
+
+msgid ""
+"   * `\"&&\"` : means \"AND\" condition.\n"
+"   * `\"||\"` : means \"OR\" condition.\n"
+"   * `\"-\"`  : means [\"NOT\" condition](http://groonga.org/docs/reference/grn_ex"
+"pr/query_syntax.html#logical-not)."
+msgstr ""
+
+msgid "  This parameter is optional, the default value is `\"&&\"`."
+msgstr ""
+
+msgid ""
+"`allowPragma`\n"
+": A boolean value to allow (`true`) or disallow (`false`) to use \"pragma\" like"
+" `*E-1`, on the head of the `query`.\n"
+"  This parameter is optional, the default value is `true`."
+msgstr ""
+
+msgid ""
+"`allowColumn`\n"
+": A boolean value to allow (`true`) or disallow (`false`) to specify column na"
+"me for each query in the `query`, like `name:Alice`.\n"
+"  This parameter is optional, the default value is `true`."
+msgstr ""
+
+msgid ""
+"`allowLeadingNot`\n"
+": A boolean value to allow (`true`) or disallow (`false`) to appear \"negative "
+"expression\" on the first query in the `query`, like `-foobar`.\n"
+"  This parameter is optional, the default value is `false`."
+msgstr ""
+
+msgid ""
+"`matchEscalationThreshold`\n"
+": An integer to specify the threshold to escalate search methods.\n"
+"  When the number of search results by indexes is smaller than this value, the"
+"n Droonga does the search based on partial matching, etc.\n"
+"  See also [the specification of the search behavior of Groonga](http://groong"
+"a.org/docs/spec/search.html) for more details.\n"
+"  This parameter is optional, the default value is `0`."
+msgstr ""
+
+msgid "##### Complex search condition as an array {#query-condition-array}"
+msgstr ""
+
+msgid "In this pattern, you'll specify a search condition as an array like:"
+msgstr ""
+
+msgid ""
+"    [\n"
+"      \"&&\",\n"
+"      <search condition 1>,\n"
+"      <search condition 2>,\n"
+"      ...\n"
+"    ]"
+msgstr ""
+
+msgid "The fist element of the array is an operator string. Possible values:"
+msgstr ""
+
+msgid ""
+" * `\"&&\"` : means \"AND\" condition.\n"
+" * `\"||\"` : means \"OR\" condition.\n"
+" * `\"-\"`  : means [\"NOT\" condition](http://groonga.org/docs/reference/grn_expr"
+"/query_syntax.html#logical-not)."
+msgstr ""
+
+msgid ""
+"Rest elements are logically operated based on the operator.\n"
+"For example this is an \"AND\" operated condition based on two conditions, means"
+" \"the value of the `name` equals to `\"Alice\"`, and, the value of the `age` is "
+"`20` or more\":"
+msgstr ""
+
+msgid "    [\"&&\", \"name == 'Alice'\", \"age >= 20\"]"
+msgstr ""
+
+msgid ""
+"Nested array means more complex conditions. For example, this means \"`name` eq"
+"uals to `\"Alice\"` and `age` is `20` or more, but `job` does not equal to `\"eng"
+"ineer\"`\":"
+msgstr ""
+
+msgid ""
+"    [\n"
+"      \"-\",\n"
+"      [\"&&\", \"name == 'Alice'\", \"age >= 20\"],\n"
+"      \"job == 'engineer'\"\n"
+"    ]"
+msgstr ""
+
+msgid "#### `sortBy` {#query-sortBy}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": Conditions for sorting and paging."
+msgstr ""
+
+msgid ""
+"  1. An array of column name strings.\n"
+"  2. A hash including an array of sort column name strings and paging conditio"
+"ns."
+msgstr ""
+
+msgid ""
+"If sort conditions are not specified, then all results will appear as-is, for "
+"following operations and the output."
+msgstr ""
+
+msgid "##### Basic sort condition {#query-sortBy-array}"
+msgstr ""
+
+msgid "Sort condition is given as an array of column name strings."
+msgstr ""
+
+msgid ""
+"At first Droonga tries to sort records by the value of the first given sort co"
+"lumn. After that, if there are multiple records which have same value for the "
+"column, then Droonga tries to sort them by the secondary given sort column. Th"
+"ese processes are repeated for all given sort columns."
+msgstr ""
+
+msgid "You must specify sort columns as an array, even if there is only one column."
+msgstr ""
+
+msgid ""
+"Records are sorted by the value of the column value, in an ascending order. Re"
+"sults can be sorted in descending order if sort column name has a prefix `-`."
+msgstr ""
+
+msgid ""
+"For example, this condition means \"sort records by the `name` at first in an a"
+"scending order, and sort them by their `age~ column in the descending order\":"
+msgstr ""
+
+msgid "    [\"name\", \"-age\"]"
+msgstr ""
+
+msgid "##### Paging of sorted results {#query-sortBy-hash}"
+msgstr ""
+
+msgid "Paging conditions can be specified as a part of a sort condition hash, like:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"keys\"   : [<Sort columns>],\n"
+"      \"offset\" : <Offset of paging>,\n"
+"      \"limit\"  : <Number of results to be extracted>\n"
+"    }"
+msgstr ""
+
+msgid ""
+"`keys`\n"
+": Sort conditions same to [the basic sort condition](#query-sortBy-array).\n"
+"  This parameter is always required."
+msgstr ""
+
+msgid ""
+"`offset`\n"
+": An integer meaning the offset to the paging of sorted results. Possible valu"
+"es are `0` or larger integers."
+msgstr ""
+
+msgid "  This parameter is optional and the default value is `0`."
+msgstr ""
+
+msgid ""
+"`limit`\n"
+": An integer meaning the number of sorted results to be extracted. Possible va"
+"lues are `-1`, `0`, or larger integers. The value `-1` means \"return all resul"
+"ts\"."
+msgstr ""
+
+msgid "  This parameter is optional and the default value is `-1`."
+msgstr ""
+
+msgid "For example, this condition extracts 10 sorted results from 11th to 20th:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"keys\"   : [\"name\", \"-age\"],\n"
+"      \"offset\" : 10,\n"
+"      \"limit\"  : 10\n"
+"    }"
+msgstr ""
+
+msgid ""
+"In most cases, paging by a sort condition is faster than paging by `output`'s "
+"`limit` and `output`, because this operation reduces the number of records."
+msgstr ""
+
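+msgid ""
+"For example, this is just a sketch of a query which does its paging in `sortBy"
+"` instead of `output` (the table name `People` and the column names are only i"
+"llustrative):"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"source\" : \"People\",\n"
+"      \"sortBy\" : {\n"
+"        \"keys\"   : [\"name\"],\n"
+"        \"offset\" : 10,\n"
+"        \"limit\"  : 10\n"
+"      },\n"
+"      \"output\" : {\n"
+"        \"elements\"   : [\"count\", \"records\"],\n"
+"        \"attributes\" : [\"name\", \"age\"]\n"
+"      }\n"
+"    }"
+msgstr ""
+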
+msgid "#### `groupBy` {#query-groupBy}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": A condition for grouping of (sorted) search results."
+msgstr ""
+
+msgid ""
+"  1. A condition string to do grouping. (a column name or an expression)\n"
+"  2. A hash to specify a condition for grouping with details."
+msgstr ""
+
+msgid ""
+"If a condition for grouping is given, then grouped result records will appear "
+"as the result, for following operations and the output."
+msgstr ""
+
+msgid "##### Basic condition of grouping {#query-groupBy-string}"
+msgstr ""
+
+msgid ""
+"A condition of grouping is given as a string of a column name or an expression"
+"."
+msgstr ""
+
+msgid ""
+"Droonga groups (sorted) search result records, based on the value of the speci"
+"fied column. Then the result of the grouping will appear instead of search res"
+"ults from the `source`. Result records of a grouping will have following colum"
+"ns:"
+msgstr ""
+
+msgid ""
+"`_key`\n"
+": A value of the grouped column."
+msgstr ""
+
+msgid ""
+"`_nsubrecs`\n"
+": An integer meaning the number of grouped records."
+msgstr ""
+
+msgid ""
+"For example, this condition means \"group records by their `job` column's value"
+", with the number of grouped records for each value\":"
+msgstr ""
+
+msgid "    \"job\""
+msgstr ""
+
+msgid "##### Condition of grouping with details {#query-groupBy-hash}"
+msgstr ""
+
+msgid "A condition of grouping can include more options, like:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"key\"            : \"<Basic condition for grouping>\",\n"
+"      \"maxNSubRecords\" : <Number of sample records included into each grouped "
+"result>\n"
+"    }"
+msgstr ""
+
+msgid ""
+"`key`\n"
+": A string meaning [a basic condition of grouping](#query-groupBy-string).\n"
+"  This parameter is always required."
+msgstr ""
+
+msgid ""
+"`maxNSubRecords`\n"
+": An integer, meaning maximum number of sample records included into each grou"
+"ped result. Possible values are `0` or larger. `-1` is not acceptable."
+msgstr ""
+
+msgid "  This parameter is optional, the default value is `0`."
+msgstr ""
+
+msgid ""
+"For example, this condition will return results grouped by their `job` column "
+"with one sample record per a grouped result:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"key\"            : \"job\",\n"
+"      \"maxNSubRecords\" : 1\n"
+"    }"
+msgstr ""
+
+msgid ""
+"Grouped results will have all columns of [the result of the basic conditions f"
+"or grouping](#query-groupBy-string), and following extra columns:"
+msgstr ""
+
+msgid ""
+"`_subrecs`\n"
+": An array of sample records which have the value in its grouped column."
+msgstr ""
+
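+msgid ""
+"For example, when records are grouped by their `job` column with `maxNSubRecor"
+"ds` `1`, a grouped record exported in the `\"complex\"` format may look like the"
+" following sketch (all values are only illustrative):"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"_key\"      : \"engineer\",\n"
+"      \"_nsubrecs\" : 3,\n"
+"      \"_subrecs\"  : [\n"
+"        { \"name\" : \"Alice\", \"age\" : 20 }\n"
+"      ]\n"
+"    }"
+msgstr ""
+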
+msgid ""
+"*Note: On the version {{ site.droonga_version }}, too many records can be retu"
+"rned larger than the specified `maxNSubRecords`, if the dataset has multiple v"
+"olumes. This is a known problem and to be fixed in a future version."
+msgstr ""
+
+msgid "#### `output` {#query-output}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": A output definition for a search result"
+msgstr ""
+
+msgid ""
+"Value\n"
+": A hash including information to control output format."
+msgstr ""
+
+msgid ""
+"If no `output` is given, then search results of the query won't be exported to"
+" the returned message.\n"
+"You can reduce processing time and traffic via omitting of `output` for tempor"
+"ary tables which are used only for grouping and so on."
+msgstr ""
+
+msgid "An output definition is given as a hash like:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"elements\"   : [<Names of elements to be exported>],\n"
+"      \"format\"     : \"<Format of each record>\",\n"
+"      \"offset\"     : <Offset of paging>,\n"
+"      \"limit\"      : <Number of records to be exported>,\n"
+"      \"attributes\" : <Definition of columnst to be exported for each record>\n"
+"    }"
+msgstr ""
+
+msgid ""
+"`elements`\n"
+": An array of strings, meaning the list of elements exported to the result of "
+"the search query in a [search response](#response).\n"
+"  Possible values are following, and you must specify it as an array even if y"
+"ou export just one element:"
+msgstr ""
+
+msgid ""
+"   * `\"startTime\"`\n"
+"   * `\"elapsedTime\"`\n"
+"   * `\"count\"`\n"
+"   * `\"attributes\"`\n"
+"   * `\"records\"`"
+msgstr ""
+
+msgid ""
+"  This parameter is optional, there is not default value. Nothing will be expo"
+"rted if no element is specified."
+msgstr ""
+
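+msgid ""
+"For example, even if you want to export only the records, you must specify the"
+" value as an array with one element, like:"
+msgstr ""
+
+msgid "    \"elements\" : [\"records\"]"
+msgstr ""
+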
+msgid ""
+"`format`\n"
+": A string meaning the format of exported each record.\n"
+"  Possible values:"
+msgstr ""
+
+msgid ""
+"   * `\"simple\"`  : Each record will be exported as an array of column values.\n"
+"   * `\"complex\"` : Each record will be exported as a hash."
+msgstr ""
+
+msgid "  This parameter is optional, the default value is `\"simple\"`."
+msgstr ""
+
+msgid ""
+"`offset`\n"
+": An integer meaning the offset to the paging of exported records. Possible va"
+"lues are `0` or larger integers."
+msgstr ""
+
+msgid ""
+"`limit`\n"
+": An integer meaning the number of exported records. Possible values are `-1`,"
+" `0`, or larger integers. The value `-1` means \"export all records\"."
+msgstr ""
+
+msgid ""
+"`attributes`\n"
+": Definition of columns to be exported for each record.\n"
+"  Possible patterns:"
+msgstr ""
+
+msgid ""
+"   1. An array of column definitions.\n"
+"   2. A hash of column definitions."
+msgstr ""
+
+msgid "  Each column can be defined in one of following styles:"
+msgstr ""
+
+msgid ""
+"   * A name string of a column.\n"
+"     * `\"name\"` : Exports the value of the `name` column, as is.\n"
+"     * `\"age\"`  : Exports the value of the `age` column, as is.\n"
+"   * A hash with details:\n"
+"     * This exports the value of the `name` column as a column with different "
+"name `realName`."
+msgstr ""
+
+msgid "           { \"label\" : \"realName\", \"source\" : \"name\" }"
+msgstr ""
+
+msgid ""
+"     * This exports the snippet in HTML fragment as a column with the name `ht"
+"ml`."
+msgstr ""
+
+msgid "           { \"label\" : \"html\", \"source\": \"snippet_html(name)\" }"
+msgstr ""
+
+msgid ""
+"     * This exports a static value `\"Japan\"` for the `country` column of all r"
+"ecords.\n"
+"       (This will be useful for debugging, or a use case to try modification o"
+"f APIs.)"
+msgstr ""
+
+msgid "           { \"label\" : \"country\", \"source\" : \"'Japan'\" }"
+msgstr ""
+
+msgid ""
+"     * This exports a number of grouped records as the `\"itemsCount\"` column o"
+"f each record (grouped result)."
+msgstr ""
+
+msgid "           { \"label\" : \"itemsCount\", \"source\" : \"_nsubrecs\", }"
+msgstr ""
+
+msgid ""
+"     * This exports samples of the source records of grouped records, as the `"
+"\"items\"` column of grouped records.\n"
+"       The format of the `\"attributes\"` is just same to this section."
+msgstr ""
+
+msgid ""
+"           { \"label\" : \"items\", \"source\" : \"_subrecs\",\n"
+"             \"attributes\": [\"name\", \"price\"] }"
+msgstr ""
+
+msgid ""
+"  An array of column definitions can contain any type definition described abo"
+"ve, like:"
+msgstr ""
+
+msgid ""
+"      [\n"
+"        \"name\",\n"
+"        \"age\",\n"
+"        { \"label\" : \"realName\", \"source\" : \"name\" }\n"
+"      ]"
+msgstr ""
+
+msgid ""
+"  In this case, you can use a special column name `\"*\"` which means \"all colum"
+"ns except special columns like `_key`\"."
+msgstr ""
+
+msgid ""
+"    * `[\"*\"]` exports all columns (except `_key` and `_id`), as is.\n"
+"    * `[\"_key\", \"*\"]` exports exports all columns as is, with preceding `_key`"
+".\n"
+"    * `[\"*\", \"_nsubrecs\"]` exports exports all columns as is, with following `"
+"_nsubrecs`."
+msgstr ""
+
+msgid ""
+"  A hash of column definitions can contain any type definition described above"
+" except `label` of hashes, because keys of the hash means `label` of each colu"
+"mn, like:"
+msgstr ""
+
+msgid ""
+"      {\n"
+"        \"name\"     : \"name\",\n"
+"        \"age\"      : \"age\",\n"
+"        \"realName\" : { \"source\" : \"name\" },\n"
+"        \"country\"  : { \"source\" : \"'Japan'\" }\n"
+"      }"
+msgstr ""
+
+msgid ""
+"  This parameter is optional, there is no default value. No column will be exp"
+"orted if no column is specified."
+msgstr ""
+
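+msgid ""
+"For example, this is just a sketch of a whole `output` which exports the total"
+" count and the first 10 records with three columns (the column names are only "
+"illustrative):"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"elements\"   : [\"count\", \"records\"],\n"
+"      \"format\"     : \"complex\",\n"
+"      \"offset\"     : 0,\n"
+"      \"limit\"      : 10,\n"
+"      \"attributes\" : [\"_key\", \"name\", \"age\"]\n"
+"    }"
+msgstr ""
+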
+msgid "## Responses {#response}"
+msgstr ""
+
+msgid ""
+"This command returns a hash as the result as the `body`, with `200` as the `st"
+"atusCode`."
+msgstr ""
+
+msgid ""
+"Keys of the result hash is the name of each query (a result of a search query)"
+", values of the hash is the result of each [search query](#query-parameters), "
+"like:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"<Name of the query 1>\" : {\n"
+"        \"startTime\"   : \"<Time to start the operation>\",\n"
+"        \"elapsedTime\" : <Elapsed time to process the query, in milliseconds),\n"
+"        \"count\"       : <Number of records searched by the given conditions>,\n"
+"        \"attributes\"  : <Array or hash of exported columns>,\n"
+"        \"records\"     : [<Array of search result records>]\n"
+"      },\n"
+"      \"<Name of the query 2>\" : { ... },\n"
+"      ...\n"
+"    }"
+msgstr ""
+
+msgid ""
+"A hash of a search query's result can have following elements, but only some e"
+"lements specified in the `elements` of the [`output` parameter](#query-output)"
+" will appear in the response."
+msgstr ""
+
+msgid "### `startTime` {#response-query-startTime}"
+msgstr ""
+
+msgid "A local time string meaning the search operation is started."
+msgstr ""
+
+msgid ""
+"It is formatted in the [W3C-DTF](http://www.w3.org/TR/NOTE-datetime \"Date and "
+"Time Formats\"), with the time zone like:"
+msgstr ""
+
+msgid "    2013-11-29T08:15:30+09:00"
+msgstr ""
+
+msgid "### `elapsedTime` {#response-query-elapsedTime}"
+msgstr ""
+
+msgid "An integer meaning the elapsed time of the search operation, in milliseconds."
+msgstr ""
+
+msgid "### `count` {#response-query-count}"
+msgstr ""
+
+msgid ""
+"An integer meaning the total number of search result records.\n"
+"Paging options `offset` and `limit` in [`sortBy`](#query-sortBy) or [`output`]"
+"(#query-output) will not affect to this count."
+msgstr ""
+
+msgid "### `attributes` and `records` {#response-query-attributes-and-records}"
+msgstr ""
+
+msgid ""
+" * `attributes` is an array or a hash including information of exported column"
+"s for each record.\n"
+" * `records` is an array of search result records."
+msgstr ""
+
+msgid ""
+"There are two possible patterns of `attributes` and `records`, based on the [`"
+"output`](#query-output)'s `format` parameter."
+msgstr ""
+
+msgid "#### Simple format result {#response-query-simple-attributes-and-records}"
+msgstr ""
+
+msgid ""
+"A search result with `\"simple\"` as the value of `output`'s `format` will be re"
+"turned as a hash like:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"startTime\"   : \"<Time to start the operation>\",\n"
+"      \"elapsedTime\" : <Elapsed time to process the query),\n"
+"      \"count\"       : <Total number of search result records>,\n"
+"      \"attributes\"  : [\n"
+"        { \"name\"   : \"<Name of the column 1>\",\n"
+"          \"type\"   : \"<Type of the column 1>\",\n"
+"          \"vector\" : <It this column is a vector column?> },\n"
+"        { \"name\"   : \"<Name of the column 2>\",\n"
+"          \"type\"   : \"<Type of the column 2>\",\n"
+"          \"vector\" : <It this column is a vector column?> },\n"
+"        { \"name\"       : \"<Name of the column 3 (with subrecords)>\"\n"
+"          \"attributes\" : [\n"
+"          { \"name\"   : \"<Name of the column 3-1>\",\n"
+"            \"type\"   : \"<Type of the column 3-1>\",\n"
+"            \"vector\" : <It this column is a vector column?> },\n"
+"          { \"name\"   : \"<Name of the the column 3-2>\",\n"
+"            \"type\"   : \"<Type of the the column 3-2>\",\n"
+"            \"vector\" : <It this column is a vector column?> },\n"
+"          ],\n"
+"          ...\n"
+"        },\n"
+"        ...\n"
+"      ],\n"
+"      \"records\"     : [\n"
+"        [<Value of the column 1 of the record 1>,\n"
+"         <Value of the column 2 of the record 1>,\n"
+"         [\n"
+"          [<Value of the column of 3-1 of the subrecord 1 of the record 1>,\n"
+"           <Value of the column of 3-2 of the subrecord 2 of the record 1>,\n"
+"           ...],\n"
+"          [<Value of the column of 3-1 of the subrecord 1 of the record 1>,\n"
+"           <Value of the column of 3-2 of the subrecord 2 of the record 1>,\n"
+"           ...],\n"
+"          ...],\n"
+"         ...],\n"
+"        [<Value of the column 1 of the record 2>,\n"
+"         <Value of the column 2 of the record 2>,\n"
+"         [\n"
+"          [<Value of the column of 3-1 of the subrecord 1 of the record 2>,\n"
+"           <Value of the column of 3-2 of the subrecord 2 of the record 2>,\n"
+"           ...],\n"
+"          [<Value of the column of 3-1 of the subrecord 1 of the record 2>,\n"
+"           <Value of the column of 3-2 of the subrecord 2 of the record 2>,\n"
+"           ...],\n"
+"          ...],\n"
+"         ...],\n"
+"        ...\n"
+"      ]\n"
+"    }"
+msgstr ""
+
+msgid ""
+"This format is designed to reduce traffic with small responses, instead of use"
+"ful rich data format.\n"
+"Recommended for cases when the response can include too much records, or the s"
+"ervice can accept too much requests."
+msgstr ""
+
+msgid "##### `attributes` {#response-query-simple-attributes}"
+msgstr ""
+
+msgid ""
+"An array of column informations for each exported search result, ordered by [t"
+"he `output` parameter](#query-output)'s `attributes`."
+msgstr ""
+
+msgid ""
+"Each column information is returned as a hash in the form of one of these thre"
+"e variations corresponding to the kind of values. The hash will have the follo"
+"wing keys respectively:"
+msgstr ""
+
+msgid "###### For ordinal columns"
+msgstr ""
+
+msgid ""
+"`name`\n"
+": A string meaning the name (label) of the exported column. It is just same to"
+" labels defined in [the `output` parameter](#query-output)'s `attributes`."
+msgstr ""
+
+msgid ""
+"`type`\n"
+": A string meaning the value type of the column.\n"
+"  The type is indicated as one of [Groonga's primitive data formats](http://gr"
+"oonga.org/docs/reference/types.html), or a name of an existing table for refer"
+"ring columns."
+msgstr ""
+
+msgid ""
+"`vector`\n"
+": A boolean value meaning it is a [vector column](http://groonga.org/docs/tuto"
+"rial/data.html#vector-types) or not.\n"
+"  Possible values:"
+msgstr ""
+
+msgid ""
+"   * `true`  : It is a vector column.\n"
+"   * `false` : It is not a vector column, but a scalar column."
+msgstr ""
+
+msgid "###### For columns corresponding to subrecords"
+msgstr ""
+
+msgid ""
+"`attributes`\n"
+": An array including information about columns of subrecords. The form is the "
+"same as `attributes` for (main) records. This means `attributes` has recursive"
+" structure."
+msgstr ""
+
+msgid "###### For expressions"
+msgstr ""
+
+msgid "##### `records` {#response-query-simple-records}"
+msgstr ""
+
+msgid "An array of exported search result records."
+msgstr ""
+
+msgid ""
+"Each record is exported as an array of column values, ordered by the [`output`"
+" parameter](#query-output)'s `attributes`."
+msgstr ""
+
+msgid ""
+"A value of [date time type](http://groonga.org/docs/tutorial/data.html#date-an"
+"d-time-type) column will be returned as a string formatted in the [W3C-DTF](ht"
+"tp://www.w3.org/TR/NOTE-datetime \"Date and Time Formats\"), with the time zone."
+msgstr ""
+
+msgid "#### Complex format result {#response-query-complex-attributes-and-records}"
+msgstr ""
+
+msgid ""
+"A search result with `\"complex\"` as the value of `output`'s `format` will be r"
+"eturned as a hash like:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"startTime\"   : \"<Time to start the operation>\",\n"
+"      \"elapsedTime\" : <Elapsed time to process the query),\n"
+"      \"count\"       : <Total number of search result records>,\n"
+"      \"attributes\"  : {\n"
+"        \"<Name of the column 1>\" : { \"type\"   : \"<Type of the column 1>\",\n"
+"                                     \"vector\" : <It this column is a vector co"
+"lumn?> },\n"
+"        \"<Name of the column 2>\" : { \"type\"   : \"<Type of the column 2>\",\n"
+"                                     \"vector\" : <It this column is a vector co"
+"lumn?> },\n"
+"        \"<Name of the column 3 (with subrecords)>\" : {\n"
+"          \"attributes\" : {\n"
+"            \"<Name of the column 3-1>\" : { \"type\"   : \"<Type of the column 3-1"
+">\",\n"
+"                                           \"vector\" : <It this column is a vec"
+"tor column?> },\n"
+"            \"<Name of the column 3-2>\" : { \"type\"   : \"<Type of the column 3-2"
+">\",\n"
+"                                           \"vector\" : <It this column is a vec"
+"tor column?> },\n"
+"            ...\n"
+"          }\n"
+"        },\n"
+"        ...\n"
+"      ],\n"
+"      \"records\"     : [\n"
+"        { \"<Name of the column 1>\" : <Value of the column 1 of the record 1>,\n"
+"          \"<Name of the column 2>\" : <Value of the column 2 of the record 1>,\n"
+"          \"<Name of the column 3 (with subrecords)>\" : [\n"
+"            { \"<Name of the column 3-1>\" : <Value of the column 3-1 of the sub"
+"record 1 of record 1>,\n"
+"              \"<Name of the column 3-2>\" : <Value of the column 3-2 of the sub"
+"record 1 of record 1>,\n"
+"              ... },\n"
+"            { \"<Name of the column 3-1>\" : <Value of the column 3-1 of the sub"
+"record 2 of record 1>,\n"
+"              \"<Name of the column 3-2>\" : <Value of the column 3-2 of the sub"
+"record 2 of record 1>,\n"
+"              ... },\n"
+"            ...\n"
+"          ],\n"
+"          ...                                                                }"
+",\n"
+"        { \"<Name of the column 1>\" : <Value of the column 1 of the record 2>,\n"
+"          \"<Name of the column 2>\" : <Value of the column 2 of the record 2>,\n"
+"          \"<Name of the column 3 (with subrecords)>\" : [\n"
+"            { \"<Name of the column 3-1>\" : <Value of the column 3-1 of the sub"
+"record 1 of record 2>,\n"
+"              \"<Name of the column 3-2>\" : <Value of the column 3-2 of the sub"
+"record 1 of record 2>,\n"
+"              ... },\n"
+"            { \"<Name of the column 3-1>\" : <Value of the column 3-1 of the sub"
+"record 2 of record 2>,\n"
+"              \"<Name of the column 3-2>\" : <Value of the column 3-2 of the sub"
+"record 2 of record 2>,\n"
+"              ... },\n"
+"            ...\n"
+"          ],\n"
+"          ...                                                                }"
+",\n"
+"        ...\n"
+"      ]\n"
+"    }"
+msgstr ""
+
+msgid ""
+"This format is designed to keep human readability, instead of less traffic.\n"
+"Recommended for small traffic cases like development, debugging, features only"
+" for administrators, and so on."
+msgstr ""
+
+msgid "##### `attributes` {#response-query-complex-attributes}"
+msgstr ""
+
+msgid ""
+"A hash of column informations for each exported search result. Keys of the has"
+"h are column names defined by [the `output` parameter](#query-output)'s `attri"
+"butes`, values are informations of each column."
+msgstr ""
+
+msgid ""
+"`type`\n"
+": A string meaning the value type of the column.\n"
+"  The type is indicated as one of [Groonga's primitive data formats](http://gr"
+"oonga.org/docs/reference/types.html), or a name for an existing table for refe"
+"rring columns."
+msgstr ""
+
+msgid "Has no key. Just a empty hash `{}` will be returned."
+msgstr ""
+
+msgid "##### `records` {#response-query-complex-records}"
+msgstr ""
+
+msgid ""
+"Each record is exported as a hash. Keys of the hash are column names defined b"
+"y [`output` parameter](#query-output)'s `attributes`, values are column values"
+"."
+msgstr ""
+
+msgid "## Error types {#errors}"
+msgstr ""
+
+msgid ""
+"This command reports errors not only [general errors](/reference/message/#erro"
+"r) but also followings."
+msgstr ""
+
+msgid "### `MissingSourceParameter`"
+msgstr ""
+
+msgid ""
+"Means you've forgotten to specify the `source` parameter. The status code is `"
+"400`."
+msgstr ""
+
+msgid "### `UnknownSource`"
+msgstr ""
+
+msgid ""
+"Means there is no existing table and no other query with the name, for a `sour"
+"ce` of a query. The status code is `404`."
+msgstr ""
+
+msgid "### `CyclicSource`"
+msgstr ""
+
+msgid "Means there is any circular reference of sources. The status code is `400`."
+msgstr ""
+
+msgid "### `SearchTimeout`"
+msgstr ""
+
+msgid ""
+"Means the engine couldn't finish to process the request in the time specified "
+"as `timeout`. The status code is `500`."
+msgstr ""

  Added: _po/ja/reference/1.1.0/commands/select/index.po (+212 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/commands/select/index.po    2014-11-30 23:20:40 +0900 (d855d3c)
@@ -0,0 +1,212 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: select\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## Abstract {#abstract}"
+msgstr ""
+
+msgid ""
+"The `select` command finds records from the specified table based on given con"
+"ditions, and returns found records."
+msgstr ""
+
+msgid ""
+"This is compatible to [the `select` command of the Groonga](http://groonga.org"
+"/docs/reference/commands/select.html)."
+msgstr ""
+
+msgid "## API types {#api-types}"
+msgstr ""
+
+msgid "### HTTP {#api-types-http}"
+msgstr ""
+
+msgid ""
+"Request endpoint\n"
+": `(Document Root)/d/select`"
+msgstr ""
+
+msgid ""
+"Request methd\n"
+": `GET`"
+msgstr ""
+
+msgid ""
+"Request URL parameters\n"
+": Same to the list of [parameters](#parameters)."
+msgstr ""
+
+msgid ""
+"Request body\n"
+": Nothing."
+msgstr ""
+
+msgid ""
+"Response body\n"
+": A [response message](#response)."
+msgstr ""
+
+msgid "### REST {#api-types-rest}"
+msgstr ""
+
+msgid "Not supported."
+msgstr ""
+
+msgid "### Fluentd {#api-types-fluentd}"
+msgstr ""
+
+msgid ""
+"Style\n"
+": Request-Response. One response message is always returned per one request."
+msgstr ""
+
+msgid ""
+"`type` of the request\n"
+": `select`"
+msgstr ""
+
+msgid ""
+"`body` of the request\n"
+": A hash of [parameters](#parameters)."
+msgstr ""
+
+msgid ""
+"`type` of the response\n"
+": `select.result`"
+msgstr ""
+
+msgid "## Parameter syntax {#syntax}"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"table\"            : \"<Name of the table>\",\n"
+"      \"match_columns\"    : \"<List of matching columns, separated by '||'>\",\n"
+"      \"query\"            : \"<Simple search conditions>\",\n"
+"      \"filter\"           : \"<Complex search conditions>\",\n"
+"      \"scorer\"           : \"<An expression to be applied to matched records>\","
+"\n"
+"      \"sortby\"           : \"<List of sorting columns, separated by ','>\",\n"
+"      \"output_columns\"   : \"<List of returned columns, separated by ','>\",\n"
+"      \"offset\"           : <Offset of paging>,\n"
+"      \"limit\"            : <Number of records to be returned>,\n"
+"      \"drilldown\"        : \"<Column name to be drilldown-ed>\",\n"
+"      \"drilldown_sortby\" : \"List of sorting columns for drilldown's result, se"
+"parated by ','>\",\n"
+"      \"drilldown_output_columns\" :\n"
+"                           \"List of returned columns for drilldown's result, s"
+"eparated by ','>\",\n"
+"      \"drilldown_offset\" : <Offset of drilldown's paging>,\n"
+"      \"drilldown_limit\"  : <Number of drilldown results to be returned>,\n"
+"      \"cache\"            : \"<Query cache option>\",\n"
+"      \"match_escalation_threshold\":\n"
+"                           <Threshold to escalate search methods>,\n"
+"      \"query_flags\"      : \"<Flags to customize query parameters>\",\n"
+"      \"query_expander\"   : \"<Arguments to expanding queries>\"\n"
+"    }"
+msgstr ""
+
+msgid "## Parameter details {#parameters}"
+msgstr ""
+
+msgid "All parameters except `table` are optional."
+msgstr ""
+
+msgid ""
+"On the version {{ site.droonga_version }}, only following parameters are avail"
+"able. Others are simply ignored because they are not implemented."
+msgstr ""
+
+msgid ""
+" * `table`\n"
+" * `match_columns`\n"
+" * `query`\n"
+" * `query_flags`\n"
+" * `filter`\n"
+" * `output_columns`\n"
+" * `offset`\n"
+" * `limit`\n"
+" * `drilldown`\n"
+" * `drilldown_output_columns`\n"
+" * `drilldown_sortby`\n"
+" * `drilldown_offset`\n"
+" * `drilldown_limit`"
+msgstr ""
+
+msgid ""
+"All parameters are compatible to [parameters for `select` command of the Groon"
+"ga](http://groonga.org/docs/reference/commands/select.html#parameters). See th"
+"e linked document for more details."
+msgstr ""
+
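+msgid ""
+"For example, this is just a sketch of a request message for the Fluentd API (t"
+"he dataset name `Default`, the table name `Entries` and the other values are o"
+"nly illustrative):"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"id\"      : \"select:0\",\n"
+"      \"type\"    : \"select\",\n"
+"      \"replyTo\" : \"localhost:24224/output\",\n"
+"      \"dataset\" : \"Default\",\n"
+"      \"body\"    : {\n"
+"        \"table\"          : \"Entries\",\n"
+"        \"match_columns\"  : \"name\",\n"
+"        \"query\"          : \"Alice\",\n"
+"        \"output_columns\" : \"_key,name,age\",\n"
+"        \"limit\"          : 10\n"
+"      }\n"
+"    }"
+msgstr ""
+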
+msgid "## Responses {#response}"
+msgstr ""
+
+msgid "This returns an array including search results as the response's `body`."
+msgstr ""
+
+msgid ""
+"    [\n"
+"      [\n"
+"        <Groonga's status code>,\n"
+"        <Start time>,\n"
+"        <Elapsed time>\n"
+"      ],\n"
+"      <List of columns>\n"
+"    ]"
+msgstr ""
+
+msgid ""
+"The structure of the returned array is compatible to [the returned value of th"
+"e Groonga's `select` command](http://groonga.org/docs/reference/commands/selec"
+"t.html#id6). See the linked document for more details."
+msgstr ""
+
+msgid ""
+"This command always returns a response with `200` as its `statusCode`, because"
+" this is a Groonga compatible command and errors of this command must be handl"
+"ed in the way same to Groonga's one."
+msgstr ""
+
+msgid "Response body's details:"
+msgstr ""
+
+msgid ""
+"Status code\n"
+": An integer which means the operation's result. Possible values are:"
+msgstr ""
+
+msgid ""
+"   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : Successfully processed"
+".\n"
+"   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : There is an"
+"y invalid argument."
+msgstr ""
+
+msgid ""
+"Start time\n"
+": An UNIX time which the operation was started on."
+msgstr ""
+
+msgid ""
+"Elapsed time\n"
+": A decimal of seconds meaning the elapsed time for the operation."
+msgstr ""

  Added: _po/ja/reference/1.1.0/commands/system/index.po (+24 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/commands/system/index.po    2014-11-30 23:20:40 +0900 (0c85c98)
@@ -0,0 +1,24 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: system\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"`system` is a namespace for commands to report system information of the clust"
+"er."
+msgstr ""
+
+msgid " * [system.status](status/): Reports status information of the cluster"
+msgstr ""

  Added: _po/ja/reference/1.1.0/commands/system/status/index.po (+168 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/commands/system/status/index.po    2014-11-30 23:20:40 +0900 (6c46fe6)
@@ -0,0 +1,168 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: system.status\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## Abstract {#abstract}"
+msgstr ""
+
+msgid "The `system.status` command reports current status of the clsuter itself."
+msgstr ""
+
+msgid "## API types {#api-types}"
+msgstr ""
+
+msgid "### HTTP {#api-types-http}"
+msgstr ""
+
+msgid ""
+"Request endpoint\n"
+": `(Document Root)/droonga/system/status`"
+msgstr ""
+
+msgid ""
+"Request methd\n"
+": `GET`"
+msgstr ""
+
+msgid ""
+"Request URL parameters\n"
+": Nothing."
+msgstr ""
+
+msgid ""
+"Request body\n"
+": Nothing."
+msgstr ""
+
+msgid ""
+"Response body\n"
+": A [response message](#response)."
+msgstr ""
+
+msgid "### REST {#api-types-rest}"
+msgstr ""
+
+msgid "Not supported."
+msgstr ""
+
+msgid "### Fluentd {#api-types-fluentd}"
+msgstr ""
+
+msgid ""
+"Style\n"
+": Request-Response. One response message is always returned per one request."
+msgstr ""
+
+msgid ""
+"`type` of the request\n"
+": `system.status`"
+msgstr ""
+
+msgid ""
+"`body` of the request\n"
+": Nothing."
+msgstr ""
+
+msgid ""
+"`type` of the response\n"
+": `system.status.result`"
+msgstr ""
+
+msgid "## Parameter syntax {#syntax}"
+msgstr ""
+
+msgid "This command has no parameter."
+msgstr ""
+
+msgid "## Usage {#usage}"
+msgstr ""
+
+msgid ""
+"This command reports the list of nodes and their vital information.\n"
+"For example:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\" : \"system.status\",\n"
+"      \"body\" : {}\n"
+"    }"
+msgstr ""
+
+msgid ""
+"    => {\n"
+"         \"type\" : \"system.status.result\",\n"
+"         \"body\" : {\n"
+"           \"nodes\": {\n"
+"             \"192.168.0.10:10031/droonga\": {\n"
+"               \"live\": true\n"
+"             },\n"
+"             \"192.168.0.11:10031/droonga\": {\n"
+"               \"live\": false\n"
+"             }\n"
+"           }\n"
+"         }\n"
+"       }"
+msgstr ""
+
+msgid "## Responses {#response}"
+msgstr ""
+
+msgid ""
+"This returns a hash like following as the response's `body`, with `200` as its"
+" `statusCode`."
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"nodes\" : {\n"
+"        \"<Identifier of the node 1>\" : {\n"
+"          \"live\" : <Vital status of the node>\n"
+"        },\n"
+"        \"<Identifier of the node 2>\" : { ... },\n"
+"        ...\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid ""
+"`nodes`\n"
+": A hash including information of nodes in the cluster.\n"
+"  Keys of the hash are identifiers of nodes defined in the `catalog.json`, wit"
+"h the format: `hostname:port/tag`.\n"
+"  Each value indicates status information of corresponding node, and have foll"
+"owing information:"
+msgstr ""
+
+msgid ""
+"  `live`\n"
+"  : A boolean value indicating vital state of the node.\n"
+"    If `true`, the node can process messages, and messages are delivered to it"
+".\n"
+"    Otherwise, the node doesn't process any message for now, because it is dow"
+"n or some reasons."
+msgstr ""
+
+msgid "## Error types {#errors}"
+msgstr ""
+
+msgid "This command reports [general errors](/reference/message/#error)."
+msgstr ""

  Added: _po/ja/reference/1.1.0/commands/table-create/index.po (+177 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/commands/table-create/index.po    2014-11-30 23:20:40 +0900 (0e963bb)
@@ -0,0 +1,177 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: table_create\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## Abstract {#abstract}"
+msgstr ""
+
+msgid "The `table_create` command creates a new table."
+msgstr ""
+
+msgid ""
+"This is compatible to [the `table_create` command of the Groonga](http://groon"
+"ga.org/docs/reference/commands/table_create.html)."
+msgstr ""
+
+msgid "## API types {#api-types}"
+msgstr ""
+
+msgid "### HTTP {#api-types-http}"
+msgstr ""
+
+msgid ""
+"Request endpoint\n"
+": `(Document Root)/d/table_create`"
+msgstr ""
+
+msgid ""
+"Request methd\n"
+": `GET`"
+msgstr ""
+
+msgid ""
+"Request URL parameters\n"
+": Same to the list of [parameters](#parameters)."
+msgstr ""
+
+msgid ""
+"Request body\n"
+": Nothing."
+msgstr ""
+
+msgid ""
+"Response body\n"
+": A [response message](#response)."
+msgstr ""
+
+msgid "### REST {#api-types-rest}"
+msgstr ""
+
+msgid "Not supported."
+msgstr ""
+
+msgid "### Fluentd {#api-types-fluentd}"
+msgstr ""
+
+msgid ""
+"Style\n"
+": Request-Response. One response message is always returned per one request."
+msgstr ""
+
+msgid ""
+"`type` of the request\n"
+": `table_create`"
+msgstr ""
+
+msgid ""
+"`body` of the request\n"
+": A hash of [parameters](#parameters)."
+msgstr ""
+
+msgid ""
+"`type` of the response\n"
+": `table_create.result`"
+msgstr ""
+
+msgid "## Parameter syntax {#syntax}"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"name\"              : \"<Name of the table>\",\n"
+"      \"flags\"             : \"<Flags for the table>\",\n"
+"      \"key_type\"          : \"<Type of the primary key>\",\n"
+"      \"value_type\"        : \"<Type of the value>\",\n"
+"      \"default_tokenizer\" : \"<Default tokenizer>\",\n"
+"      \"normalizer\"        : \"<Normalizer>\"\n"
+"    }"
+msgstr ""
+
+msgid "## Parameter details {#parameters}"
+msgstr ""
+
+msgid "All parameters except `name` are optional."
+msgstr ""
+
+msgid ""
+"They are compatible to [the parameters of the `table_create` command of the Gr"
+"oonga](http://groonga.org/docs/reference/commands/table_create.html#parameters"
+"). See the linked document for more details."
+msgstr ""
+
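+msgid ""
+"For example, this is just a sketch of a request message for the Fluentd API wh"
+"ich creates a table with a `ShortText` primary key (the dataset name `Default`"
+", the table name `Entries` and the other parameter values are only illustrativ"
+"e):"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"id\"      : \"table_create:0\",\n"
+"      \"type\"    : \"table_create\",\n"
+"      \"replyTo\" : \"localhost:24224/output\",\n"
+"      \"dataset\" : \"Default\",\n"
+"      \"body\"    : {\n"
+"        \"name\"     : \"Entries\",\n"
+"        \"flags\"    : \"TABLE_PAT_KEY\",\n"
+"        \"key_type\" : \"ShortText\"\n"
+"      }\n"
+"    }"
+msgstr ""
+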
+msgid "## Responses {#response}"
+msgstr ""
+
+msgid "This returns an array meaning the result of the operation, as the `body`."
+msgstr ""
+
+msgid ""
+"    [\n"
+"      [\n"
+"        <Groonga's status code>,\n"
+"        <Start time>,\n"
+"        <Elapsed time>\n"
+"      ],\n"
+"      <Table is successfully created or not>\n"
+"    ]"
+msgstr ""
+
+msgid ""
+"This command always returns a response with `200` as its `statusCode`, because"
+" this is a Groonga compatible command and errors of this command must be handl"
+"ed in the way same to Groonga's one."
+msgstr ""
+
+msgid "Response body's details:"
+msgstr ""
+
+msgid ""
+"Status code\n"
+": An integer which means the operation's result. Possible values are:"
+msgstr ""
+
+msgid ""
+"   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : Successfully processed"
+".\n"
+"   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : There is an"
+"y invalid argument."
+msgstr ""
+
+msgid ""
+"Start time\n"
+": An UNIX time which the operation was started on."
+msgstr ""
+
+msgid ""
+"Elapsed time\n"
+": A decimal of seconds meaning the elapsed time for the operation."
+msgstr ""
+
+msgid ""
+"Table is successfully created or not\n"
+": A boolean value meaning the table was successfully created or not. Possible "
+"values are:"
+msgstr ""
+
+msgid ""
+"   * `true`:The table was successfully created.\n"
+"   * `false`:The table was not created."
+msgstr ""

  Added: _po/ja/reference/1.1.0/commands/table-list/index.po (+148 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/commands/table-list/index.po    2014-11-30 23:20:40 +0900 (d3ddc64)
@@ -0,0 +1,148 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: table_list\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## Abstract {#abstract}"
+msgstr ""
+
+msgid ""
+"The `table_list` command reports the list of all existing tables in the datase"
+"t."
+msgstr ""
+
+msgid ""
+"This is compatible to [the `table_list` command of the Groonga](http://groonga"
+".org/docs/reference/commands/table_list.html)."
+msgstr ""
+
+msgid "## API types {#api-types}"
+msgstr ""
+
+msgid "### HTTP {#api-types-http}"
+msgstr ""
+
+msgid ""
+"Request endpoint\n"
+": `(Document Root)/d/table_list`"
+msgstr ""
+
+msgid ""
+"Request methd\n"
+": `GET`"
+msgstr ""
+
+msgid ""
+"Request URL parameters\n"
+": Nothing."
+msgstr ""
+
+msgid ""
+"Request body\n"
+": Nothing."
+msgstr ""
+
+msgid ""
+"Response body\n"
+": A [response message](#response)."
+msgstr ""
+
+msgid "### REST {#api-types-rest}"
+msgstr ""
+
+msgid "Not supported."
+msgstr ""
+
+msgid "### Fluentd {#api-types-fluentd}"
+msgstr ""
+
+msgid ""
+"Style\n"
+": Request-Response. One response message is always returned per one request."
+msgstr ""
+
+msgid ""
+"`type` of the request\n"
+": `table_list`"
+msgstr ""
+
+msgid ""
+"`body` of the request\n"
+": `null` or a blank hash."
+msgstr ""
+
+msgid ""
+"`type` of the response\n"
+": `table_list.result`"
+msgstr ""
+
+msgid "## Responses {#response}"
+msgstr ""
+
+msgid "This returns an array including list of tables as the response's `body`."
+msgstr ""
+
+msgid ""
+"    [\n"
+"      [\n"
+"        <Groonga's status code>,\n"
+"        <Start time>,\n"
+"        <Elapsed time>\n"
+"      ],\n"
+"      <List of tables>\n"
+"    ]"
+msgstr ""
+
+msgid ""
+"The structure of the returned array is compatible to [the returned value of th"
+"e Groonga's `table_list` command](http://groonga.org/docs/reference/commands/t"
+"able_list.html#id5). See the linked document for more details."
+msgstr ""
+
+msgid ""
+"This command always returns a response with `200` as its `statusCode`, because"
+" this is a Groonga compatible command and errors of this command must be handl"
+"ed in the way same to Groonga's one."
+msgstr ""
+
+msgid "Response body's details:"
+msgstr ""
+
+msgid ""
+"Status code\n"
+": An integer which means the operation's result. Possible values are:"
+msgstr ""
+
+msgid ""
+"   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : Successfully processed"
+".\n"
+"   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : There is an"
+"y invalid argument."
+msgstr ""
+
+msgid ""
+"Start time\n"
+": An UNIX time which the operation was started on."
+msgstr ""
+
+msgid ""
+"Elapsed time\n"
+": A decimal of seconds meaning the elapsed time for the operation."
+msgstr ""

  Added: _po/ja/reference/1.1.0/commands/table-remove/index.po (+172 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/commands/table-remove/index.po    2014-11-30 23:20:40 +0900 (df19522)
@@ -0,0 +1,172 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: table_remove\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## Abstract {#abstract}"
+msgstr ""
+
+msgid "The `table_remove` command removes an existing table."
+msgstr ""
+
+msgid ""
+"This is compatible to [the `table_remove` command of the Groonga](http://groon"
+"ga.org/docs/reference/commands/table_remove.html)."
+msgstr ""
+
+msgid "## API types {#api-types}"
+msgstr ""
+
+msgid "### HTTP {#api-types-http}"
+msgstr ""
+
+msgid ""
+"Request endpoint\n"
+": `(Document Root)/d/table_remove`"
+msgstr ""
+
+msgid ""
+"Request methd\n"
+": `GET`"
+msgstr ""
+
+msgid ""
+"Request URL parameters\n"
+": Same to the list of [parameters](#parameters)."
+msgstr ""
+
+msgid ""
+"Request body\n"
+": Nothing."
+msgstr ""
+
+msgid ""
+"Response body\n"
+": A [response message](#response)."
+msgstr ""
+
+msgid "### REST {#api-types-rest}"
+msgstr ""
+
+msgid "Not supported."
+msgstr ""
+
+msgid "### Fluentd {#api-types-fluentd}"
+msgstr ""
+
+msgid ""
+"Style\n"
+": Request-Response. One response message is always returned per one request."
+msgstr ""
+
+msgid ""
+"`type` of the request\n"
+": `table_remove`"
+msgstr ""
+
+msgid ""
+"`body` of the request\n"
+": A hash of [parameters](#parameters)."
+msgstr ""
+
+msgid ""
+"`type` of the response\n"
+": `table_remove.result`"
+msgstr ""
+
+msgid "## Parameter syntax {#syntax}"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"name\" : \"<Name of the table>\"\n"
+"    }"
+msgstr ""
+
+msgid "## Parameter details {#parameters}"
+msgstr ""
+
+msgid "The only one parameter `name` is required."
+msgstr ""
+
+msgid ""
+"They are compatible to [the parameters of the `table_remove` command of the Gr"
+"oonga](http://groonga.org/docs/reference/commands/table_remove.html#parameters"
+"). See the linked document for more details."
+msgstr ""
+
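+msgid ""
+"For example, this is just a sketch of a request message for the Fluentd API (t"
+"he dataset name `Default` and the table name `Entries` are only illustrative):"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"id\"      : \"table_remove:0\",\n"
+"      \"type\"    : \"table_remove\",\n"
+"      \"replyTo\" : \"localhost:24224/output\",\n"
+"      \"dataset\" : \"Default\",\n"
+"      \"body\"    : {\n"
+"        \"name\" : \"Entries\"\n"
+"      }\n"
+"    }"
+msgstr ""
+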
+msgid "## Responses {#response}"
+msgstr ""
+
+msgid "This returns an array meaning the result of the operation, as the `body`."
+msgstr ""
+
+msgid ""
+"    [\n"
+"      [\n"
+"        <Groonga's status code>,\n"
+"        <Start time>,\n"
+"        <Elapsed time>\n"
+"      ],\n"
+"      <Table is successfully removed or not>\n"
+"    ]"
+msgstr ""
+
+msgid ""
+"This command always returns a response with `200` as its `statusCode`, because"
+" this is a Groonga compatible command and errors of this command must be handl"
+"ed in the way same to Groonga's one."
+msgstr ""
+
+msgid "Response body's details:"
+msgstr ""
+
+msgid ""
+"Status code\n"
+": An integer which means the operation's result. Possible values are:"
+msgstr ""
+
+msgid ""
+"   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : Successfully processed"
+".\n"
+"   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : There is an"
+"y invalid argument."
+msgstr ""
+
+msgid ""
+"Start time\n"
+": An UNIX time which the operation was started on."
+msgstr ""
+
+msgid ""
+"Elapsed time\n"
+": A decimal of seconds meaning the elapsed time for the operation."
+msgstr ""
+
+msgid ""
+"Table is successfully removed or not\n"
+": A boolean value meaning the table was successfully removed or not. Possible "
+"values are:"
+msgstr ""
+
+msgid ""
+"   * `true`:The table was successfully removed.\n"
+"   * `false`:The table was not removed."
+msgstr ""

  Added: _po/ja/reference/1.1.0/http-server/index.po (+260 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/http-server/index.po    2014-11-30 23:20:40 +0900 (58c6874)
@@ -0,0 +1,260 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: HTTP Server\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## Abstract {#abstract}"
+msgstr ""
+
+msgid ""
+"The [Droonga HTTP Server][droonga-http-server] is as an HTTP protocol adapter "
+"for the Droonga Engine."
+msgstr ""
+
+msgid ""
+"The Droonga Engine supports only the fluentd protocol, so you have to use `flu"
+"ent-cat` or something, to communicate with the Drooga Engine.\n"
+"This application provides ability to communicate with the Droonga Engine via H"
+"TTP."
+msgstr ""
+
+msgid "## Install {#install}"
+msgstr ""
+
+msgid ""
+"It is released as the [droonga-http-server npm module][], a [Node.js][] module"
+" package.\n"
+"You can install it via the `npm` command, like:"
+msgstr ""
+
+msgid "    # npm install -g droonga-http-server"
+msgstr ""
+
+msgid "## Usage {#usage}"
+msgstr ""
+
+msgid "### Command line options {#usage-command}"
+msgstr ""
+
+msgid ""
+"It includes a command `droonga-http-server` to start an HTTP server.\n"
+"You can start it with command line options, like:"
+msgstr ""
+
+msgid "    # droonga-http-server --port 3003"
+msgstr ""
+
+msgid "Available options and their default values are:"
+msgstr ""
+
+msgid ""
+"`--port <13000>`\n"
+": The port number which the server receives HTTP requests at."
+msgstr ""
+
+msgid ""
+"`--receive-host-name <127.0.0.1>`\n"
+": The host name (or the IP address) of the computer itself which the server is"
+" running.\n"
+"  It is used by the Droonga Engine, to send response messages to the protocol "
+"adapter."
+msgstr ""
+
+msgid ""
+"`--droonga-engine-host-name <127.0.0.1>`\n"
+": The host name (or the IP address) of the computer which the Droonga Engine i"
+"s running on."
+msgstr ""
+
+msgid ""
+"`--droonga-engine-port <24224>`\n"
+": The port number which the Droonga Engine receives messages at."
+msgstr ""
+
+msgid ""
+"`--default-dataset <Droonga>`\n"
+": The name of the default dataset.\n"
+"  It is used for requests triggered via built-in HTTP APIs."
+msgstr ""
+
+msgid ""
+"`--tag <droonga>`\n"
+": The tag used for fluentd messages sent to the Droonga Engine."
+msgstr ""
+
+msgid ""
+"`--enable-logging`\n"
+": If you specify this option, log messages are printed to the standard output."
+msgstr ""
+
+msgid ""
+"`--cache-size <100>`\n"
+": The maximum size of the LRU response cache.\n"
+"  Droonga HTTP server caches all responses for GET requests on the RAM, unthil"
+" this size."
+msgstr ""
+
+msgid ""
+"You have to specify appropriate values for your Droonga Engine. For example, i"
+"f the HTTP server is running on the host 192.168.10.90 and the Droonga engine "
+"is running on the host 192.168.10.100 with following configurations:"
+msgstr ""
+
+msgid "fluentd.conf:"
+msgstr ""
+
+msgid ""
+"    <source>\n"
+"      type forward\n"
+"      port 24324\n"
+"    </source>\n"
+"    <match books.message>\n"
+"      name localhost:24224/books\n"
+"      type droonga\n"
+"    </match>\n"
+"    <match output.message>\n"
+"      type stdout\n"
+"    </match>"
+msgstr ""
+
+msgid "catalog.json:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"version\": 2,\n"
+"      \"effectiveDate\": \"2013-09-01T00:00:00Z\",\n"
+"      \"datasets\": {\n"
+"        \"Books\": {\n"
+"          ...\n"
+"        }\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid ""
+"Then, you'll start the HTTP server on the host 192.168.10.90, with options lik"
+"e:"
+msgstr ""
+
+msgid ""
+"    # droonga-http-server --receive-host-name 192.168.10.90 \\\n"
+"                          --droonga-engine-host-name 192.168.10.100 \\\n"
+"                          --droonga-engine-port 24324 \\\n"
+"                          --default-dataset Books \\\n"
+"                          --tag books"
+msgstr ""
+
+msgid "See also the [basic tutorial][]."
+msgstr ""
+
+msgid "## Built-in APIs {#usage-api}"
+msgstr ""
+
+msgid "The Droonga HTTP Server includes following APIs:"
+msgstr ""
+
+msgid "### REST API {#usage-rest}"
+msgstr ""
+
+msgid "#### `GET /tables/<table name>` {#usage-rest-get-tables-table}"
+msgstr ""
+
+msgid ""
+"This emits a simple [search request](../commands/search/).\n"
+"The [`source`](../commands/search/#query-source) is filled by the table name i"
+"n the path.\n"
+"Available query parameters are:"
+msgstr ""
+
+msgid ""
+"`attributes`\n"
+": Corresponds to [`output.attributes`](../commands/search/#query-output).\n"
+"  The value is a comma-separated list, like: `attributes=_key,name,age`."
+msgstr ""
+
+msgid ""
+"`query`\n"
+": Corresponds to [`condition.*.query`](../commands/search/#query-condition-que"
+"ry-syntax-hash).\n"
+"  The vlaue is a query string."
+msgstr ""
+
+msgid ""
+"`match_to`\n"
+": Corresponds to [`condition.*.matchTo`](../commands/search/#query-condition-q"
+"uery-syntax-hash).\n"
+"  The vlaue is an comma-separated list, like: `match_to=_key,name`."
+msgstr ""
+
+msgid ""
+"`match_escalation_threshold`\n"
+": Corresponds to [`condition.*.matchEscalationThreshold`](../commands/search/#"
+"query-condition-query-syntax-hash).\n"
+"  The vlaue is an integer."
+msgstr ""
+
+msgid ""
+"`script`\n"
+": Corresponds to [`condition`](../commands/search/#query-condition-query-synta"
+"x-hash) in the script syntax.\n"
+"  If you specity both `query` and `script`, then they work with an `and` logic"
+"al condition."
+msgstr ""
+
+msgid ""
+"`adjusters`\n"
+": Corresponds to `adjusters`."
+msgstr ""
+
+msgid ""
+"`sort_by`\n"
+": Corresponds to [`sortBy`](../commands/search/#query-sortBy).\n"
+"  The value is a column name string."
+msgstr ""
+
+msgid ""
+"`limit`\n"
+": Corresponds to [`output.limit`](../commands/search/#query-output).\n"
+"  The value is an integer."
+msgstr ""
+
+msgid ""
+"`offset`\n"
+": Corresponds to [`output.offset`](../commands/search/#query-output).\n"
+"  The value is an integer."
+msgstr ""
+
+msgid "### Groonga HTTP server compatible API {#usage-groonga}"
+msgstr ""
+
+msgid "#### `GET /d/<command name>` {#usage-groonga-d}"
+msgstr ""
+
+msgid "(TBD)"
+msgstr ""
+
+msgid ""
+"  [basic tutorial]: ../../tutorial/basic/\n"
+"  [droonga-http-server]: https://github.com/droonga/droonga-http-server\n"
+"  [droonga-http-server npm module]: https://npmjs.org/package/droonga-http-ser"
+"ver\n"
+"  [Node.js]: http://nodejs.org/"
+msgstr ""

  Added: _po/ja/reference/1.1.0/index.po (+44 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/index.po    2014-11-30 23:20:40 +0900 (dc9f747)
@@ -0,0 +1,44 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: Reference manuals\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"[Catalog](catalog/)\n"
+": Describes details of `catalog.json` which defines behavior of the Droonga En"
+"gine."
+msgstr ""
+
+msgid ""
+"[Message format](message/)\n"
+": Describes details of message format flowing in the Droonga Engines."
+msgstr ""
+
+msgid ""
+"[Commands](commands/)\n"
+": Describes details of built-in commands available on the Droonga Engines."
+msgstr ""
+
+msgid ""
+"[HTTP Server](http-server/)\n"
+": Describes usage of the [droonga-http-server](https://github.com/droonga/droo"
+"nga-http-server)."
+msgstr ""
+
+msgid ""
+"[Plugin development](plugin/)\n"
+": Describes details of public APIs to develop custom plugins for the Droonga E"
+"ngine."
+msgstr ""

  Added: _po/ja/reference/1.1.0/message/index.po (+314 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/message/index.po    2014-11-30 23:20:40 +0900 (120bf63)
@@ -0,0 +1,314 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: Message format\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## Request {#request}"
+msgstr ""
+
+msgid "The basic format of a request message is like following:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"id\"      : \"<ID of the message>\",\n"
+"      \"type\"    : \"<Type of the message>\",\n"
+"      \"replyTo\" : \"<Route to the receiver>\",\n"
+"      \"dataset\" : \"<Name of the target dataset>\",\n"
+"      \"body\"    : <Body of the message>\n"
+"    }"
+msgstr ""
+
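+msgid ""
+"For example, this is just a sketch of a request message for the `search` comma"
+"nd (the `id` and the dataset name `Default` are only illustrative, and the bod"
+"y is elided):"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"id\"      : \"search:0\",\n"
+"      \"type\"    : \"search\",\n"
+"      \"replyTo\" : \"localhost:24224/output\",\n"
+"      \"dataset\" : \"Default\",\n"
+"      \"body\"    : { ... }\n"
+"    }"
+msgstr ""
+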
+msgid "### `id` {#request-id}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": The unique identifier for the message."
+msgstr ""
+
+msgid ""
+"Value\n"
+": An identifier string. You can use any string with any format as you like, if"
+" only it is unique. The given id of a request message will be used for the ['i"
+"nReplyTo`](#response-inReplyTo) information of its response."
+msgstr ""
+
+msgid ""
+"Default value\n"
+": Nothing. This is required information."
+msgstr ""
+
+msgid "### `type` {#request-type}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": The type of the message."
+msgstr ""
+
+msgid ""
+"Value\n"
+": A type string of [a command](/reference/commands/)."
+msgstr ""
+
+msgid "### `replyTo` {#request-replyTo}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": The route to the response receiver."
+msgstr ""
+
+msgid ""
+"Value\n"
+": An path string in the format: `<hostname>:<port>/<tag>`, for example: `local"
+"host:24224/output`."
+msgstr ""
+
+msgid ""
+"Default value\n"
+": Nothing. This is optional. If you specify no `replyTo`, then the response me"
+"ssage will be thrown away."
+msgstr ""
+
+msgid "### `dataset` {#request-dataset}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": The target dataset."
+msgstr ""
+
+msgid ""
+"Value\n"
+": A name string of a dataset."
+msgstr ""
+
+msgid "### `body` {#request-body}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": The body of the message."
+msgstr ""
+
+msgid ""
+"Value\n"
+": Object, string, number, boolean, or `null`."
+msgstr ""
+
+msgid ""
+"Default value\n"
+": Nothing. This is optional."
+msgstr ""
+
+msgid "## Response {#response}"
+msgstr ""
+
+msgid "The basic format of a response message is like following:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\"       : \"<Type of the message>\",\n"
+"      \"inReplyTo\"  : \"<ID of the related request message>\",\n"
+"      \"statusCode\" : <Status code>,\n"
+"      \"body\"       : <Body of the message>,\n"
+"      \"errors\"     : <Errors from nodes>\n"
+"    }"
+msgstr ""
+
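+msgid ""
+"For example, this is just a sketch of a response message for the request sketc"
+"hed above (the values are only illustrative, and the body is elided):"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\"       : \"search.result\",\n"
+"      \"inReplyTo\"  : \"search:0\",\n"
+"      \"statusCode\" : 200,\n"
+"      \"body\"       : { ... }\n"
+"    }"
+msgstr ""
+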
+msgid "### `type` {#response-type}"
+msgstr ""
+
+msgid ""
+"Value\n"
+": A type string. Generally it is a suffixed version of the type string of the "
+"request message, with the suffix \".result\"."
+msgstr ""
+
+msgid "### `inReplyTo` {#response-inReplyTo}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": The identifier of the related request message."
+msgstr ""
+
+msgid ""
+"Value\n"
+": An identifier string of the related request message."
+msgstr ""
+
+msgid "### `statusCode` {#response-statusCode}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": The result status for the request message."
+msgstr ""
+
+msgid ""
+"Value\n"
+": A status code integer."
+msgstr ""
+
+msgid "Status codes of responses are similar to HTTP's one. Possible values:"
+msgstr ""
+
+msgid ""
+"`200` and other `2xx` statuses\n"
+": The command is successfully processed."
+msgstr ""
+
+msgid "### `body` {#response-body}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": The result information for the request message."
+msgstr ""
+
+msgid "### `errors` {#response-errors}"
+msgstr ""
+
+msgid ""
+"Abstract\n"
+": All errors from nodes."
+msgstr ""
+
+msgid ""
+"Value\n"
+": Object."
+msgstr ""
+
+msgid ""
+"This information will appear only when the command is distributed to multiple "
+"volumes and they returned errors. Otherwise, the response message will have no"
+" `errors` field. For more details, see [the \"Error response\" section](#error)."
+msgstr ""
+
+msgid "## Error response {#error}"
+msgstr ""
+
+msgid "Some commands can return an error response."
+msgstr ""
+
+msgid ""
+"An error response has the `type` same to a regular response, but it has differ"
+"ent `statusCode` and `body`. General type of the error is indicated by the `st"
+"atusCode`, and details are reported as the `body`."
+msgstr ""
+
+msgid ""
+"If a command is distributed to multiple volumes and they return errors, then t"
+"he response message will have an `error` field. All errors from all nodes are "
+"stored to the field, like:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\"       : \"add.result\",\n"
+"      \"inReplyTo\"  : \"...\",\n"
+"      \"statusCode\" : 400,\n"
+"      \"body\"       : {\n"
+"        \"name\":    \"UnknownTable\",\n"
+"        \"message\": ...\n"
+"      },\n"
+"      \"errors\"     : {\n"
+"        \"/path/to/the/node1\" : {\n"
+"          \"statusCode\" : 400,\n"
+"          \"body\"       : {\n"
+"            \"name\":    \"UnknownTable\",\n"
+"            \"message\": ...\n"
+"          }\n"
+"        },\n"
+"        \"/path/to/the/node2\" : {\n"
+"          \"statusCode\" : 400,\n"
+"          \"body\"       : {\n"
+"            \"name\":    \"UnknownTable\",\n"
+"            \"message\": ...\n"
+"          }\n"
+"        }\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid ""
+"In this case, one of all errors will be exported as the main message `body`, a"
+"s a representative."
+msgstr ""
+
+msgid "### Status codes of error responses {#error-status}"
+msgstr ""
+
+msgid "Status codes of error responses are similar to HTTP's one. Possible values:"
+msgstr ""
+
+msgid ""
+"`400` and other `4xx` statuses\n"
+": An error of the request message."
+msgstr ""
+
+msgid ""
+"`500` and other `5xx` statuses\n"
+": An internal error of the Droonga Engine."
+msgstr ""
+
+msgid "### Body of error responses {#error-body}"
+msgstr ""
+
+msgid "The basic format of the body of an error response is like following:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"name\"    : \"<Type of the error>\",\n"
+"      \"message\" : \"<Human readable details of the error>\",\n"
+"      \"detail\"  : <Other extra information for the error, in various formats>\n"
+"    }"
+msgstr ""
+
+msgid "If there is no detail, `detial` can be missing."
+msgstr ""
+
+msgid "#### Error types {#error-type}"
+msgstr ""
+
+msgid "There are some general error types for any command."
+msgstr ""
+
+msgid ""
+"`MissingDatasetParameter`\n"
+": Means you've forgotten to specify the `dataset`. The status code is `400`."
+msgstr ""
+
+msgid ""
+"`UnknownDataset`\n"
+": Means you've specified a dataset which is not existing. The status code is `"
+"404`."
+msgstr ""
+
+msgid ""
+"`UnknownType`\n"
+": Means there is no handler for the command given as the `type`. The status co"
+"de is `400`."
+msgstr ""

  Added: _po/ja/reference/1.1.0/plugin/adapter/index.po (+445 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/plugin/adapter/index.po    2014-11-30 23:20:40 +0900 (7493581)
@@ -0,0 +1,445 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: API set for plugins on the adaption phase\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## Abstract {#abstract}"
+msgstr ""
+
+msgid ""
+"Each Droonga Engine plugin can have its *adapter*. On the adaption phase, adap"
+"ters can modify both incoming messages (from the Protocol Adapter to the Droon"
+"ga Engine, in other words, they are \"request\"s) and outgoing messages (from th"
+"e Droonga Engine to the Protocol Adapter, in other words, they are \"response\"s"
+")."
+msgstr ""
+
+msgid "### How to define an adapter? {#howto-define}"
+msgstr ""
+
+msgid "For example, here is a sample plugin named \"foo\" with an adapter:"
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"require \"droonga/plugin\""
+msgstr ""
+
+msgid ""
+"module Droonga::Plugins::FooPlugin\n"
+"  extend Plugin\n"
+"  register(\"foo\")"
+msgstr ""
+
+msgid ""
+"  class Adapter < Droonga::Adapter\n"
+"    # operations to configure this adapter\n"
+"    XXXXXX = XXXXXX"
+msgstr ""
+
+msgid ""
+"    def adapt_input(input_message)\n"
+"      # operations to modify incoming messages\n"
+"      input_message.XXXXXX = XXXXXX\n"
+"    end"
+msgstr ""
+
+msgid ""
+"    def adapt_output(output_message)\n"
+"      # operations to modify outgoing messages\n"
+"      output_message.XXXXXX = XXXXXX\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+
+msgid "Steps to define an adapter:"
+msgstr ""
+
+msgid ""
+" 1. Define a module for your plugin (ex. `Droonga::Plugins::FooPlugin`) and re"
+"gister it as a plugin. (required)\n"
+" 2. Define an adapter class (ex. `Droonga::Plugins::FooPlugin::Adapter`) inher"
+"iting [`Droonga::Adapter`](#classes-Droonga-Adapter). (required)\n"
+" 3. [Configure conditions to apply the adapter](#howto-configure). (required)\n"
+" 4. Define adaption logic for incoming messages as [`#adapt_input`](#classes-D"
+"roonga-Adapter-adapt_input). (optional)\n"
+" 5. Define adaption logic for outgoing messages as [`#adapt_output`](#classes-"
+"Droonga-Adapter-adapt_output). (optional)"
+msgstr ""
+
+msgid ""
+"See also the [plugin development tutorial](../../../tutorial/plugin-developmen"
+"t/adapter/)."
+msgstr ""
+
+msgid "### How an adapter works? {#how-works}"
+msgstr ""
+
+msgid "An adapter works like following:"
+msgstr ""
+
+msgid ""
+" 1. The Droonga Engine starts.\n"
+"    * A global instance of the adapter class (ex. `Droonga::Plugins::FooPlugin"
+"::Adapter`) is created and it is registered.\n"
+"      * The input pattern and the output pattern are registered.\n"
+"    * The Droonga Engine starts to wait for incoming messages.\n"
+" 2. An incoming message is transferred from the Protocol Adapter to the Droong"
+"a Engine.\n"
+"    Then, the adaption phase (for an incoming message) starts.\n"
+"    * The adapter's [`#adapt_input`](#classes-Droonga-Adapter-adapt_input) is "
+"called, if the message matches to the [input matching pattern](#config) of the"
+" adapter.\n"
+"    * The method can modify the given incoming message, via [its methods](#cla"
+"sses-Droonga-InputMessage).\n"
+" 3. After all adapters are applied, the adaption phase for an incoming message"
+" ends, and the message is transferred to the next \"planning\" phase.\n"
+" 4. An outgoing message returns from the previous \"collection\" phase.\n"
+"    Then, the adaption phase (for an outgoing message) starts.\n"
+"    * The adapter's [`#adapt_output`](#classes-Droonga-Adapter-adapt_output) i"
+"s called, if the message meets following both requirements:\n"
+"      - It is originated from an incoming message which was processed by the a"
+"dapter itself.\n"
+"      - It matches to the [output matching pattern](#config) of the adapter.\n"
+"    * The method can modify the given outgoing message, via [its methods](#cla"
+"sses-Droonga-OutputMessage).\n"
+" 5. After all adapters are applied, the adaption phase for an outgoing message"
+" ends, and the outgoing message is transferred to the Protocol Adapter."
+msgstr ""
+
+msgid ""
+"As described above, the Droonga Engine creates only one global instance of the"
+" adapter class for each plugin.\n"
+"You should not keep stateful information for a pair of incoming and outgoing m"
+"essages as instance variables of the adapter itself.\n"
+"Instead, you should give stateful information as a part of the incoming messag"
+"e body, and receive it from the body of the corresponding outgoing message."
+msgstr ""
+
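+msgid ""
+"For example, the following sketch (a hypothetical \"QueryLogger\" plugin, shown "
+"only to illustrate the pitfall) is wrong: because the single shared instance h"
+"andles every message, `@query` set in `#adapt_input` can be overwritten by ano"
+"ther request before the corresponding response reaches `#adapt_output`."
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"module Droonga::Plugins::QueryLogger\n"
+"  class Adapter < Droonga::Adapter\n"
+"    def adapt_input(input_message)\n"
+"      # NG: this instance is shared by all messages, so another request\n"
+"      #     can overwrite @query before the response comes back.\n"
+"      @query = input_message.body[\"query\"]\n"
+"    end\n"
+"    def adapt_output(output_message)\n"
+"      # @query may already belong to a different request here.\n"
+"      # Carry such information in the message body instead.\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+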
+msgid ""
+"Any error raised from the adapter is handled by the Droonga Engine itself. See"
+" also [error handling][]."
+msgstr ""
+
+msgid "## Configurations {#config}"
+msgstr ""
+
+msgid ""
+"`input_message.pattern` ([matching pattern][], optional, default=`nil`)\n"
+": A [matching pattern][] for incoming messages.\n"
+"  If no pattern (`nil`) is given, any message is regarded as \"matched\"."
+msgstr ""
+
+msgid ""
+"`output_message.pattern` ([matching pattern][], optional, default=`nil`)\n"
+": A [matching pattern][] for outgoing messages.\n"
+"  If no pattern (`nil`) is given, any message is regarded as \"matched\"."
+msgstr ""
+
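+msgid ""
+"For example, the following sketch (the plugin name is hypothetical) applies an"
+" adapter only to incoming `search` requests and to their `search.result` respo"
+"nses:"
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"module Droonga::Plugins::SearchOnly\n"
+"  class Adapter < Droonga::Adapter\n"
+"    # apply this adapter only to incoming \"search\" requests...\n"
+"    input_message.pattern  = [\"type\", :equal, \"search\"]\n"
+"    # ...and only to outgoing \"search.result\" responses.\n"
+"    output_message.pattern = [\"type\", :equal, \"search.result\"]\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+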
+msgid "## Classes and methods {#classes}"
+msgstr ""
+
+msgid "### `Droonga::Adapter` {#classes-Droonga-Adapter}"
+msgstr ""
+
+msgid ""
+"This is the common base class of any adapter. Your plugin's adapter class must"
+" inherit this."
+msgstr ""
+
+msgid "#### `#adapt_input(input_message)` {#classes-Droonga-Adapter-adapt_input}"
+msgstr ""
+
+msgid ""
+"This method receives a [`Droonga::InputMessage`](#classes-Droonga-InputMessage"
+") wrapped incoming message.\n"
+"You can modify the incoming message via its methods."
+msgstr ""
+
+msgid ""
+"In this base class, this method is defined as just a placeholder and it does n"
+"othing.\n"
+"To modify incoming messages, you have to override it by yours, like following:"
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"module Droonga::Plugins::QueryFixer\n"
+"  class Adapter < Droonga::Adapter\n"
+"    def adapt_input(input_message)\n"
+"      input_message.body[\"query\"] = \"fixed query\"\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+
+msgid "#### `#adapt_output(output_message)` {#classes-Droonga-Adapter-adapt_output}"
+msgstr ""
+
+msgid ""
+"This method receives a [`Droonga::OutputMessage`](#classes-Droonga-OutputMessa"
+"ge) wrapped outgoing message.\n"
+"You can modify the outgoing message via its methods."
+msgstr ""
+
+msgid ""
+"In this base class, this method is defined as just a placeholder and it does n"
+"othing.\n"
+"To modify outgoing messages, you have to override it by yours, like following:"
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"module Droonga::Plugins::ErrorConcealer\n"
+"  class Adapter < Droonga::Adapter\n"
+"    def adapt_output(output_message)\n"
+"      output_message.status_code = Droonga::StatusCode::OK\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+
+msgid "### `Droonga::InputMessage` {#classes-Droonga-InputMessage}"
+msgstr ""
+
+msgid "#### `#type`, `#type=(type)` {#classes-Droonga-InputMessage-type}"
+msgstr ""
+
+msgid "This returns the `\"type\"` of the incoming message."
+msgstr ""
+
+msgid "You can override it by assigning a new string value, like:"
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"module Droonga::Plugins::MySearch\n"
+"  class Adapter < Droonga::Adapter\n"
+"    input_message.pattern = [\"type\", :equal, \"my-search\"]"
+msgstr ""
+
+msgid ""
+"    def adapt_input(input_message)\n"
+"      p input_message.type\n"
+"      # => \"my-search\"\n"
+"      #    This message will be handled by a plugin\n"
+"      #    for the custom \"my-search\" type."
+msgstr ""
+
+msgid "      input_message.type = \"search\""
+msgstr ""
+
+msgid ""
+"      p input_message.type\n"
+"      # => \"search\"\n"
+"      #    The messge type (type) is changed.\n"
+"      #    This message will be handled by the \"search\" plugin,\n"
+"      #    as a regular search request.\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+
+msgid "#### `#body`, `#body=(body)` {#classes-Droonga-InputMessage-body}"
+msgstr ""
+
+msgid "This returns the `\"body\"` of the incoming message."
+msgstr ""
+
+msgid "You can override it by assigning a new value, partially or fully. For example:"
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"module Droonga::Plugins::MinimumLimit\n"
+"  class Adapter < Droonga::Adapter\n"
+"    input_message.pattern = [\"type\", :equal, \"search\"]"
+msgstr ""
+
+msgid "    MAXIMUM_LIMIT = 10"
+msgstr ""
+
+msgid ""
+"    def adapt_input(input_message)\n"
+"      input_message.body[\"queries\"].each do |name, query|\n"
+"        query[\"output\"] ||= {}\n"
+"        query[\"output\"][\"limit\"] ||= MAXIMUM_LIMIT\n"
+"        query[\"output\"][\"limit\"] = [query[\"output\"][\"limit\"], MAXIMUM_LIMIT].m"
+"in\n"
+"      end\n"
+"      # Now, all queries have \"output.limit=10\".\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+
+msgid "Another case:"
+msgstr ""
+
+msgid ""
+"    def adapt_input(input_message)\n"
+"      # Extract the query string from the custom type message.\n"
+"      query_string = input_message[\"body\"][\"query\"]"
+msgstr ""
+
+msgid ""
+"      # Construct internal search request for the \"search\" type.\n"
+"      input_message.type = \"search\"\n"
+"      input_message.body = {\n"
+"        \"queries\" => {\n"
+"          \"source\"    => \"Store\",\n"
+"          \"condition\" => {\n"
+"            \"query\"   => query_string,\n"
+"            \"matchTo\" => [\"name\"],\n"
+"          },\n"
+"          \"output\" => {\n"
+"            \"elements\" => [\"records\"],\n"
+"            \"limit\"    => 10,\n"
+"          },\n"
+"        },\n"
+"      }\n"
+"      # Now, both \"type\" and \"body\" are completely replaced.\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+
+msgid "### `Droonga::OutputMessage` {#classes-Droonga-OutputMessage}"
+msgstr ""
+
+msgid ""
+"#### `#status_code`, `#status_code=(status_code)` {#classes-Droonga-OutputMess"
+"age-status_code}"
+msgstr ""
+
+msgid "This returns the `\"statusCode\"` of the outgoing message."
+msgstr ""
+
+msgid "You can override it by assigning a new status code. For example:"
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"module Droonga::Plugins::ErrorConcealer\n"
+"  class Adapter < Droonga::Adapter\n"
+"    input_message.pattern = [\"type\", :equal, \"search\"]"
+msgstr ""
+
+msgid ""
+"    def adapt_output(output_message)\n"
+"      unless output_message.status_code == StatusCode::InternalServerError\n"
+"        output_message.status_code = Droonga::StatusCode::OK\n"
+"        output_message.body = {}\n"
+"        output_message.errors = nil\n"
+"        # Now any internal server error is ignored and clients\n"
+"        # receive regular responses.\n"
+"      end\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+
+msgid "#### `#errors`, `#errors=(errors)` {#classes-Droonga-OutputMessage-errors}"
+msgstr ""
+
+msgid "This returns the `\"errors\"` of the outgoing message."
+msgstr ""
+
+msgid ""
+"You can override it by assigning new error information, partially or fully. Fo"
+"r example:"
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"module Droonga::Plugins::ErrorExporter\n"
+"  class Adapter < Droonga::Adapter\n"
+"    input_message.pattern = [\"type\", :equal, \"search\"]"
+msgstr ""
+
+msgid ""
+"    def adapt_output(output_message)\n"
+"      output_message.errors.delete(secret_database)\n"
+"      # Delete error information from secret database"
+msgstr ""
+
+msgid ""
+"      output_message.body[\"errors\"] = {\n"
+"        \"records\" => output_message.errors.collect do |database, error|\n"
+"          {\n"
+"            \"database\" => database,\n"
+"            \"error\" => error\n"
+"          }\n"
+"        end,\n"
+"      }\n"
+"      # Convert error informations to a fake search result named \"errors\".\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+
+msgid "#### `#body`, `#body=(body)` {#classes-Droonga-OutputMessage-body}"
+msgstr ""
+
+msgid "This returns the `\"body\"` of the outgoing message."
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"module Droonga::Plugins::SponsoredSearch\n"
+"  class Adapter < Droonga::Adapter\n"
+"    input_message.pattern = [\"type\", :equal, \"search\"]"
+msgstr ""
+
+msgid ""
+"    def adapt_output(output_message)\n"
+"      output_message.body.each do |name, result|\n"
+"        next unless result[\"records\"]\n"
+"        result[\"records\"].unshift(sponsored_entry)\n"
+"      end\n"
+"      # Now all search results include sponsored entry.\n"
+"    end"
+msgstr ""
+
+msgid ""
+"    def sponsored_entry\n"
+"      {\n"
+"        \"title\"=> \"SALE!\",\n"
+"        \"url\"=>   \"http://...\"\n"
+"      }\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"  [matching pattern]: ../matching-pattern/\n"
+"  [error handling]: ../error/"
+msgstr ""

  Added: _po/ja/reference/1.1.0/plugin/collector/index.po (+85 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/plugin/collector/index.po    2014-11-30 23:20:40 +0900 (df6c2de)
@@ -0,0 +1,85 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: Collector\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## Abstract {#abstract}"
+msgstr ""
+
+msgid ""
+"A collector merges two input values to single value.\n"
+"The Droonga Engine tries to collect three or more values by applying the speci"
+"fied collector for two of them again and again."
+msgstr ""
+
+msgid "## Built-in collector classes {#builtin-collectors}"
+msgstr ""
+
+msgid ""
+"There are some pre-defined collector classes used by built-in plugins.\n"
+"Of course they are available for your custom plugins."
+msgstr ""
+
+msgid "### `Droonga::Collectors::And`"
+msgstr ""
+
+msgid ""
+"Returns a result from comparison of two values by the `and` logical operator.\n"
+"If both values are logically equal to `true`, then one of them (it is indeterm"
+"inate) becomes the result."
+msgstr ""
+
+msgid ""
+"Values `null` (`nil`) and `false` are treated as `false`.\n"
+"Otherwise `true`."
+msgstr ""
+
+msgid "### `Droonga::Collectors::Or`"
+msgstr ""
+
+msgid ""
+"Returns a result from comparison of two values by the `or` logical operator.\n"
+"If only one of them is logically equal to `true`, then the value becomes the r"
+"esult.\n"
+"Otherwise, if values are logically same, one of them (it is indeterminate) bec"
+"omes the result."
+msgstr ""
+
+msgid "### `Droonga::Collectors::Sum`"
+msgstr ""
+
+msgid "Returns a summarized value of two input values."
+msgstr ""
+
+msgid "This collector works a little complicatedly."
+msgstr ""
+
+msgid ""
+" * If one of values is equal to `null` (`nil`), then the other value becomes t"
+"he result.\n"
+" * If both values are hash, then a merged hash becomes the result.\n"
+"   * The result hash has all keys of two hashes.\n"
+"     If both have same keys, then one of their values appears as the value of "
+"the key in the reuslt hash.\n"
+"   * It is indeterminate which value becomes the base.\n"
+" * Otherwise the result of `a + b` becomes the result.\n"
+"   * If they are arrays or strings, a concatenated value becomes the result.\n"
+"     It is indeterminate which value becomes the lefthand."
+msgstr ""
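+
+msgid ""
+"The following plain Ruby sketch is not the actual `Droonga::Collectors::Sum` i"
+"mplementation, but it illustrates the rules above:"
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"def sum_like_merge(a, b)\n"
+"  return b if a.nil?\n"
+"  return a if b.nil?\n"
+"  if a.is_a?(Hash) and b.is_a?(Hash)\n"
+"    # which value wins for a duplicated key is indeterminate\n"
+"    b.merge(a)\n"
+"  else\n"
+"    # numbers are added, arrays and strings are concatenated\n"
+"    a + b\n"
+"  end\n"
+"end\n"
+"sum_like_merge({\"count\" => 1}, {\"count\" => 2, \"records\" => []})\n"
+"# => {\"count\" => 1, \"records\" => []}\n"
+"~~~"
+msgstr ""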

  Added: _po/ja/reference/1.1.0/plugin/error/index.po (+119 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/plugin/error/index.po    2014-11-30 23:20:40 +0900 (74367cc)
@@ -0,0 +1,119 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: Error handling in plugins\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## Abstract {#abstract}"
+msgstr ""
+
+msgid ""
+"Any unhandled error raised from a plugin is returned as an [error response][] "
+"for the corresponding incoming message, with the status code `500` (means \"int"
+"ernal error\")."
+msgstr ""
+
+msgid ""
+"If you want formatted error information to be returned, then rescue errors and"
+" raise your custom errors inheriting `Droonga::ErrorMessage::BadRequest` or `D"
+"roonga::ErrorMessage::InternalServerError` instead of raw errors.\n"
+"(By the way, they are already included to the base class of plugins so you can"
+" define your custom errors easily like: `class CustomError < BadRequest`)"
+msgstr ""
+
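+msgid ""
+"For example, the following sketch (the plugin and error names are hypothetical"
+") defines such a custom error and raises it from a handler:"
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"module Droonga::Plugins::FooPlugin\n"
+"  class Handler < Droonga::Handler\n"
+"    class TooLongQuery < BadRequest\n"
+"    end\n"
+"    def handle(message)\n"
+"      query = message.request[\"body\"][\"query\"]\n"
+"      raise TooLongQuery.new(\"too long query!\") if query && query.size > 100\n"
+"      # ... process the query ...\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+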
+msgid "## Built-in error classes {#builtin-errors}"
+msgstr ""
+
+msgid ""
+"There are some pre-defined error classes used by built-in plugins and the Droo"
+"nga Engine itself."
+msgstr ""
+
+msgid "### `Droonga::ErrorMessage::NotFound`"
+msgstr ""
+
+msgid ""
+"Means an error which the specified resource is not found in the dataset or any"
+" source. For example:"
+msgstr ""
+
+msgid ""
+"    # the second argument means \"details\" of the error. (optional)\n"
+"    raise Droonga::NotFound.new(\"#{name} is not found!\", :elapsed_time => elap"
+"sed_time)"
+msgstr ""
+
+msgid "### `Droonga::ErrorMessage::BadRequest`"
+msgstr ""
+
+msgid ""
+"Means any error originated from the incoming message itself, ex. syntax error,"
+" validation error, and so on. For example:"
+msgstr ""
+
+msgid ""
+"    # the second argument means \"details\" of the error. (optional)\n"
+"    raise Droonga::NotFound.new(\"Syntax error in #{query}!\", :detail => detail"
+")"
+msgstr ""
+
+msgid "### `Droonga::ErrorMessage::InternalServerError`"
+msgstr ""
+
+msgid ""
+"Means other unknown error, ex. timed out, file I/O error, and so on. For examp"
+"le:"
+msgstr ""
+
+msgid ""
+"    # the second argument means \"details\" of the error. (optional)\n"
+"    raise Droonga::MessageProcessingError.new(\"busy!\", :elapsed_time => elapse"
+"d_time)"
+msgstr ""
+
+msgid "## Built-in status codes {#builtin-status-codes}"
+msgstr ""
+
+msgid ""
+"You should use following or other status codes as [a matter of principle](../."
+"./message/#error-status)."
+msgstr ""
+
+msgid ""
+"`Droonga::StatusCode::OK`\n"
+": Equals to `200`."
+msgstr ""
+
+msgid ""
+"`Droonga::StatusCode::NOT_FOUND`\n"
+": Equals to `404`."
+msgstr ""
+
+msgid ""
+"`Droonga::StatusCode::BAD_REQUEST`\n"
+": Equals to `400`."
+msgstr ""
+
+msgid ""
+"`Droonga::StatusCode::INTERNAL_ERROR`\n"
+": Equals to `500`."
+msgstr ""
+
+msgid "  [error response]: ../../message/#error"
+msgstr ""

  Added: _po/ja/reference/1.1.0/plugin/handler/index.po (+370 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/plugin/handler/index.po    2014-11-30 23:20:40 +0900 (13a37a1)
@@ -0,0 +1,370 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: API set for plugins on the handling phase\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## Abstract {#abstract}"
+msgstr ""
+
+msgid ""
+"Each Droonga Engine plugin can have its *handler*.\n"
+"On the handling phase, handlers can process a request and return a result."
+msgstr ""
+
+msgid "### How to define a handler? {#howto-define}"
+msgstr ""
+
+msgid "For example, here is a sample plugin named \"foo\" with a handler:"
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"require \"droonga/plugin\""
+msgstr ""
+
+msgid ""
+"module Droonga::Plugins::FooPlugin\n"
+"  extend Plugin\n"
+"  register(\"foo\")"
+msgstr ""
+
+msgid ""
+"  define_single_step do |step|\n"
+"    step.name = \"foo\"\n"
+"    step.handler = :Handler\n"
+"    step.collector = Collectors::And\n"
+"  end"
+msgstr ""
+
+msgid ""
+"  class Handler < Droonga::Handler\n"
+"    def handle(message)\n"
+"      # operations to process a request\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+
+msgid "Steps to define a handler:"
+msgstr ""
+
+msgid ""
+" 1. Define a module for your plugin (ex. `Droonga::Plugins::FooPlugin`) and re"
+"gister it as a plugin. (required)\n"
+" 2. Define a \"single step\" corresponding to the handler you are going to imple"
+"ment, via [`Droonga::SingleStepDefinition`](#class-Droonga-SingleStepDefinitio"
+"n). (required)\n"
+" 3. Define a handler class (ex. `Droonga::Plugins::FooPlugin::Handler`) inheri"
+"ting [`Droonga::Handler`](#classes-Droonga-Handler). (required)\n"
+" 4. Define handling logic for requests as [`#handle`](#classes-Droonga-Handler"
+"-handle). (optional)"
+msgstr ""
+
+msgid ""
+"See also the [plugin development tutorial](../../../tutorial/plugin-developmen"
+"t/handler/)."
+msgstr ""
+
+msgid "### How a handler works? {#how-works}"
+msgstr ""
+
+msgid "A handler works like following:"
+msgstr ""
+
+msgid ""
+" 1. The Droonga Engine starts.\n"
+"    * Your custom steps are registered.\n"
+"      Your custom handler classes also.\n"
+"    * Then the Droonga Engine starts to wait for request messages.\n"
+" 2. A request message is transferred from the adaption phase.\n"
+"    Then, the processing phase starts.\n"
+"    * The Droonga Engine finds a step definition from the message type.\n"
+"    * The Droonga Engine builds a \"single step\" based on the registered defini"
+"tion.\n"
+"    * A \"single step\" creates an instance of the registered handler class.\n"
+"      Then the Droonga Engine enters to the handling phase.\n"
+"      * The handler's [`#handle`](#classes-Droonga-Handler-handle) is called w"
+"ith a task massage including the request.\n"
+"        * The method can process the given incoming message as you like.\n"
+"        * The method returns a result value, as the output.\n"
+"      * After the handler finishes, the handling phase for the task message (a"
+"nd the request) ends.\n"
+"    * If no \"step\" is found for the type, nothing happens.\n"
+"    * All \"step\"s finish their task, the processing phase for the request ends"
+"."
+msgstr ""
+
+msgid ""
+"As described above, the Droonga Engine creates an instance of the handler clas"
+"s for each request."
+msgstr ""
+
+msgid ""
+"Any error raised from the handler is handled by the Droonga Engine itself. See"
+" also [error handling][]."
+msgstr ""
+
+msgid "## Configurations {#config}"
+msgstr ""
+
+msgid ""
+"`action.synchronous` (boolean, optional, default=`false`)\n"
+": Indicates that the request must be processed synchronously.\n"
+"  For example, a request to define a new column in a table must be processed a"
+"fter a request to define the table itself, if the table does not exist yet.\n"
+"  Then handlers for these requests have the configuration `action.synchronous "
+"= true`."
+msgstr ""
+
+msgid "## Classes and methods {#classes}"
+msgstr ""
+
+msgid "### `Droonga::SingleStepDefinition` {#classes-Droonga-SingleStepDefinition}"
+msgstr ""
+
+msgid "This provides methods to describe the \"step\" corresponding to the handler."
+msgstr ""
+
+msgid "#### `#name`, `#name=(name)` {#classes-Droonga-SingleStepDefinition-name}"
+msgstr ""
+
+msgid ""
+"Describes the name of the step itself.\n"
+"Possible value is a string."
+msgstr ""
+
+msgid ""
+"The Droonga Engine treats an incoming message as a request of a \"command\", if "
+"there is any step with the `name` which equals to the message's `type`.\n"
+"In other words, this defines the name of the command corresponding to the step"
+" itself."
+msgstr ""
+
+msgid ""
+"#### `#handler`, `#handler=(handler)` {#classes-Droonga-SingleStepDefinition-h"
+"andler}"
+msgstr ""
+
+msgid ""
+"Associates a specific handler class to the step itself.\n"
+"You can specify the class as any one of following choices:"
+msgstr ""
+
+msgid ""
+" * A reference to a handler class itself, like `Handler` or `Droonga::Plugins:"
+":FooPlugin::Handler`.\n"
+"   Of course, the class have to be already defined at the time.\n"
+" * A symbol which refers the name of a handler class in the current namespace,"
+" like `:Handler`.\n"
+"   This is useful if you want to describe the step at first and define the act"
+"ual class after that.\n"
+" * A class path string of a handler class, like `\"Droonga::Plugins::FooPlugin:"
+":Handler\"`.\n"
+"   This is also useful to define the class itself after the description."
+msgstr ""
+
+msgid ""
+"You must define the referenced class by the time the Droonga Engine actually p"
+"rocesses the step, if you specify the name of the handler class as a symbol or"
+" a string.\n"
+"If the Droonga Engine fails to find out the actual handler class, or no handle"
+"r is specified, then the Droonga Engine does nothing for the request."
+msgstr ""
+
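+msgid ""
+"For example, the following sketch (a variation of the \"foo\" plugin above) desc"
+"ribes the step first, referring to the handler class by its class path string,"
+" and defines the class afterwards:"
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"module Droonga::Plugins::FooPlugin\n"
+"  extend Plugin\n"
+"  register(\"foo\")\n"
+"  define_single_step do |step|\n"
+"    step.name    = \"foo\"\n"
+"    # refer to the handler class by name; it is defined below\n"
+"    step.handler = \"Droonga::Plugins::FooPlugin::Handler\"\n"
+"  end\n"
+"  class Handler < Droonga::Handler\n"
+"    def handle(message)\n"
+"      # process the request here\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+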
+msgid ""
+"#### `#collector`, `#collector=(collector)` {#classes-Droonga-SingleStepDefini"
+"tion-collector}"
+msgstr ""
+
+msgid ""
+"Associates a specific collector class to the step itself.\n"
+"You can specify the class as any one of following choices:"
+msgstr ""
+
+msgid ""
+" * A reference to a collector class itself, like `Collectors::Something` or `D"
+"roonga::Plugins::FooPlugin::MyCollector`.\n"
+"   Of course, the class have to be already defined at the time.\n"
+" * A symbol which refers the name of a collector class in the current namespac"
+"e, like `:MyCollector`.\n"
+"   This is useful if you want to describe the step at first and define the act"
+"ual class after that.\n"
+" * A class path string of a collector class, like `\"Droonga::Plugins::FooPlugi"
+"n::MyCollector\"`.\n"
+"   This is also useful to define the class itself after the description."
+msgstr ""
+
+msgid ""
+"You must define the referenced class by the time the Droonga Engine actually c"
+"ollects results, if you specify the name of the collector class as a symbol or"
+" a string.\n"
+"If the Droonga Engine fails to find out the actual collector class, or no coll"
+"ector is specified, then the Droonga Engine doesn't collect results and return"
+"s multiple messages as results."
+msgstr ""
+
+msgid "See also [descriptions of collectors][collector]."
+msgstr ""
+
+msgid "#### `#write`, `#write=(write)` {#classes-Droonga-SingleStepDefinition-write}"
+msgstr ""
+
+msgid ""
+"Describes whether the step modifies any data in the storage or don't.\n"
+"If a request aims to modify some data in the storage, the request must be proc"
+"essed for all replicas.\n"
+"Otherwise the Droonga Engine can optimize handling of the step.\n"
+"For example, caching of results, reducing of CPU/memory usage, and so on."
+msgstr ""
+
+msgid "Possible values are:"
+msgstr ""
+
+msgid ""
+" * `true`, means \"this step can modify the storage.\"\n"
+" * `false`, means \"this step never modifies the storage.\" (default)"
+msgstr ""
+
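+msgid ""
+"For example, a step for a command which modifies data in the storage (reusing "
+"the hypothetical \"foo\" plugin above) could be described like the following ske"
+"tch:"
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"module Droonga::Plugins::FooPlugin\n"
+"  extend Plugin\n"
+"  register(\"foo\")\n"
+"  define_single_step do |step|\n"
+"    step.name      = \"foo\"\n"
+"    step.handler   = :Handler\n"
+"    step.collector = Collectors::And\n"
+"    # this command modifies data in the storage, so it must be\n"
+"    # processed by all replicas\n"
+"    step.write     = true\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+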
+msgid ""
+"#### `#inputs`, `#inputs=(inputs)` {#classes-Droonga-SingleStepDefinition-inpu"
+"ts}"
+msgstr ""
+
+msgid "(TBD)"
+msgstr ""
+
+msgid ""
+"#### `#output`, `#output=(output)` {#classes-Droonga-SingleStepDefinition-outp"
+"ut}"
+msgstr ""
+
+msgid "### `Droonga::Handler` {#classes-Droonga-Handler}"
+msgstr ""
+
+msgid ""
+"This is the common base class of any handler.\n"
+"Your plugin's handler class must inherit this."
+msgstr ""
+
+msgid "#### `#handle(message)` {#classes-Droonga-Handler-handle}"
+msgstr ""
+
+msgid ""
+"This method receives a [`Droonga::HandlerMessage`](#classes-Droonga-HandlerMes"
+"sage) wrapped task message.\n"
+"You can read the request information via its methods."
+msgstr ""
+
+msgid ""
+"In this base class, this method is defined as just a placeholder and it does n"
+"othing.\n"
+"To process messages, you have to override it by yours, like following:"
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"module Droonga::Plugins::MySearch\n"
+"  class Handler < Droonga::Handler\n"
+"    def handle(message)\n"
+"      search_query = message.request[\"body\"][\"query\"]\n"
+"      ...\n"
+"      { ... } # the result\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"The Droonga Engine uses the returned value of this method as the result of the"
+" handling.\n"
+"It will be used to build the body of the unified response, and delivered to th"
+"e Protocol Adapter."
+msgstr ""
+
+msgid "### `Droonga::HandlerMessage` {#classes-Droonga-HandlerMessage}"
+msgstr ""
+
+msgid "This is a wrapper for a task message."
+msgstr ""
+
+msgid ""
+"The Droonga Engine analyzes a transferred request message, and build multiple "
+"task massages to process the request.\n"
+"A task massage has some information: a request, a step, descendant tasks, and "
+"so on."
+msgstr ""
+
+msgid "#### `#request` {#classes-Droonga-HandlerMessage-request}"
+msgstr ""
+
+msgid ""
+"This returns the request message.\n"
+"You can read request body via this method. For example:"
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"module Droonga::Plugins::MySearch\n"
+"  class Handler < Droonga::Handler\n"
+"    def handle(message)\n"
+"      request = message.request\n"
+"      search_query = request[\"body\"][\"query\"]\n"
+"      ...\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+
+msgid "#### `@context` {#classes-Droonga-HandlerMessage-context}"
+msgstr ""
+
+msgid ""
+"This is a reference to the `Groonga::Context` instance for the storage of the "
+"corresponding volume.\n"
+"See the [class reference of Rroonga][Groonga::Context]."
+msgstr ""
+
+msgid ""
+"You can use any feature of Rroonga via `@context`.\n"
+"For example, this code returns the number of records in the specified table:"
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"module Droonga::Plugins::CountRecords\n"
+"  class Handler < Droonga::Handler\n"
+"    def handle(message)\n"
+"      request = message.request\n"
+"      table_name = request[\"body\"][\"table\"]\n"
+"      count = @context[table_name].size\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"  [error handling]: ../error/\n"
+"  [collector]: ../collector/\n"
+"  [Groonga::Context]: http://ranguba.org/rroonga/en/Groonga/Context.html"
+msgstr ""

  Added: _po/ja/reference/1.1.0/plugin/index.po (+30 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/plugin/index.po    2014-11-30 23:20:40 +0900 (2cb9d27)
@@ -0,0 +1,30 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: Plugin development\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"Droonga Engine has different API sets for plugins, on each phase.\n"
+"See also the [plugin development tutorial](../../tutorial/plugin-development/)"
+"."
+msgstr ""
+
+msgid ""
+" * [API set for the adaption phase](adapter/)\n"
+" * [API set for the handling phase](handler/)\n"
+" * [Matching pattern for messages](matching-pattern/)\n"
+" * [Collector](collector/)\n"
+" * [Error handling](error/)"
+msgstr ""

  Added: _po/ja/reference/1.1.0/plugin/matching-pattern/index.po (+329 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/reference/1.1.0/plugin/matching-pattern/index.po    2014-11-30 23:20:40 +0900 (9c98b20)
@@ -0,0 +1,329 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: Matching pattern for messages\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## Abstract {#abstract}"
+msgstr ""
+
+msgid ""
+"The Droonga Engine provides a tiny language to specify patterns of messages, c"
+"alled *matching pattern*.\n"
+"It is used to specify target messages of various operations, ex. plugins."
+msgstr ""
+
+msgid "## Examples {#examples}"
+msgstr ""
+
+msgid "### Simple matching"
+msgstr ""
+
+msgid "    pattern = [\"type\", :equal, \"search\"]"
+msgstr ""
+
+msgid "This matches to messages like:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\": \"search\",\n"
+"      ...\n"
+"    }"
+msgstr ""
+
+msgid "### Matching for a deep target"
+msgstr ""
+
+msgid "    pattern = [\"body.success\", :equal, true]"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\": \"add.result\",\n"
+"      \"body\": {\n"
+"        \"success\": true\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid "Doesn't match to:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\": \"add.result\",\n"
+"      \"body\": {\n"
+"        \"success\": false\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid "### Nested patterns"
+msgstr ""
+
+msgid ""
+"    pattern = [\n"
+"                 [\"type\", :equal, \"table_create\"],\n"
+"                 :or,\n"
+"                 [\"body.success\", :equal, true]\n"
+"              ]"
+msgstr ""
+
+msgid "This matches to both:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\": \"table_create\",\n"
+"      ...\n"
+"    }"
+msgstr ""
+
+msgid "and:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"type\": \"column_create\",\n"
+"      ...\n"
+"      \"body\": {\n"
+"        \"success\": true\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid "## Syntax {#syntax}"
+msgstr ""
+
+msgid ""
+"There are two typeos of matching patterns: \"basic pattern\" and \"nested pattern"
+"\"."
+msgstr ""
+
+msgid "### Basic pattern {#syntax-basic}"
+msgstr ""
+
+msgid "#### Structure {#syntax-basic-structure}"
+msgstr ""
+
+msgid ""
+"A basic pattern is described as an array including 2 or more elements, like fo"
+"llowing:"
+msgstr ""
+
+msgid "    [\"type\", :equal, \"search\"]"
+msgstr ""
+
+msgid ""
+" * The first element is a *target path*. It means the location of the informat"
+"ion to be checked, in the [message][].\n"
+" * The second element is an *operator*. It means how the information specified"
+" by the target path should be checked.\n"
+" * The third element is an *argument for the oeprator*. It is a primitive valu"
+"e (string, numeric, or boolean) or an array of values. Some operators require "
+"no argument."
+msgstr ""
+
+msgid "#### Target path {#syntax-basic-target-path}"
+msgstr ""
+
+msgid "The target path is specified as a string, like:"
+msgstr ""
+
+msgid "    \"body.success\""
+msgstr ""
+
+msgid ""
+"The matching mechanism of the Droonga Engine interprets it as a dot-separated "
+"list of *path components*.\n"
+"A path component represents the property in the message with same name.\n"
+"So, the example above means the location:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"body\": {\n"
+"        \"success\": <target>\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid "#### Avialable operators {#syntax-basic-operators}"
+msgstr ""
+
+msgid "The operator is specified as a symbol."
+msgstr ""
+
+msgid ""
+"`:equal`\n"
+": Returns `true`, if the target value is equal to the given value. Otherwise `"
+"false`.\n"
+"  For example,"
+msgstr ""
+
+msgid "      [\"type\", :equal, \"search\"]"
+msgstr ""
+
+msgid "  The pattern above matches to a message like following:"
+msgstr ""
+
+msgid ""
+"      {\n"
+"        \"type\": \"search\",\n"
+"        ...\n"
+"      }"
+msgstr ""
+
+msgid ""
+"`:in`\n"
+": Returns `true`, if the target value is in the given array of values. Otherwi"
+"se `false`.\n"
+"  For example,"
+msgstr ""
+
+msgid "      [\"type\", :in, [\"search\", \"select\"]]"
+msgstr ""
+
+msgid ""
+"      {\n"
+"        \"type\": \"select\",\n"
+"        ...\n"
+"      }"
+msgstr ""
+
+msgid "  But it doesn't match to:"
+msgstr ""
+
+msgid ""
+"      {\n"
+"        \"type\": \"find\",\n"
+"        ...\n"
+"      }"
+msgstr ""
+
+msgid ""
+"`:include`\n"
+": Returns `true` if the target array of values includes the given value. Other"
+"wise `false`.\n"
+"  In other words, this is the opposite of the `:in` operator.\n"
+"  For example,"
+msgstr ""
+
+msgid "      [\"body.tags\", :include, \"News\"]"
+msgstr ""
+
+msgid ""
+"      {\n"
+"        \"type\": \"my.notification\",\n"
+"        \"body\": {\n"
+"          \"tags\": [\"News\", \"Groonga\", \"Droonga\", \"Fluentd\"]\n"
+"        }\n"
+"      }"
+msgstr ""
+
+msgid ""
+"`:exist`\n"
+": Returns `true` if the target exists. Otherwise `false`.\n"
+"  For example,"
+msgstr ""
+
+msgid "      [\"body.comments\", :exist, \"News\"]"
+msgstr ""
+
+msgid ""
+"      {\n"
+"        \"type\": \"my.notification\",\n"
+"        \"body\": {\n"
+"          \"title\": \"Hello!\",\n"
+"          \"comments\": []\n"
+"        }\n"
+"      }"
+msgstr ""
+
+msgid ""
+"      {\n"
+"        \"type\": \"my.notification\",\n"
+"        \"body\": {\n"
+"          \"title\": \"Hello!\"\n"
+"        }\n"
+"      }"
+msgstr ""
+
+msgid ""
+"`:start_with`\n"
+": Returns `true` if the target string value starts with the given string. Othe"
+"rwise `false`.\n"
+"  For example,"
+msgstr ""
+
+msgid "      [\"body.path\", :start_with, \"/archive/\"]"
+msgstr ""
+
+msgid ""
+"      {\n"
+"        \"type\": \"my.notification\",\n"
+"        \"body\": {\n"
+"          \"path\": \"/archive/2014/02/28.html\"\n"
+"        }\n"
+"      }"
+msgstr ""
+
+msgid "### Nested pattern {#syntax-nested}"
+msgstr ""
+
+msgid "#### Structure {#syntax-nested-structure}"
+msgstr ""
+
+msgid ""
+"A nested pattern is described as an array including 3 elements, like following"
+":"
+msgstr ""
+
+msgid ""
+"    [\n"
+"      [\"type\", :equal, \"table_create\"],\n"
+"      :or,\n"
+"      [\"type\", :equal, \"column_create\"]\n"
+"    ]"
+msgstr ""
+
+msgid ""
+" * The first and the third elements are patterns, basic or nested. (In other w"
+"ords, you can nest patterns recursively.)\n"
+" * The second element is a *logical operator*."
+msgstr ""
+
+msgid "#### Avialable operators {#syntax-nested-operators}"
+msgstr ""
+
+msgid ""
+"`:and`\n"
+": Returns `true` if both given patterns are evaluated as `true`. Otherwise `fa"
+"lse`."
+msgstr ""
+
+msgid ""
+"`:or`\n"
+": Returns `true` if one of given patterns (the first or the third element) is "
+"evaluated as `true`. Otherwise `false`."
+msgstr ""
+
+msgid "  [message]:../../message/"
+msgstr ""

  Added: _po/ja/tutorial/1.1.0/add-replica/index.po (+524 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/tutorial/1.1.0/add-replica/index.po    2014-11-30 23:20:40 +0900 (94243a2)
@@ -0,0 +1,524 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: \"Droonga tutorial: How to add a new replica to an existing cluster?\"\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## The goal of this tutorial"
+msgstr ""
+
+msgid ""
+"Learning steps to add a new replica node, remove an existing replica, and repl"
+"ace a replica with new one, for your existing [Droonga][] cluster."
+msgstr ""
+
+msgid "## Precondition"
+msgstr ""
+
+msgid ""
+"* You must have an existing Droonga cluster with some data.\n"
+"  Please complete the [\"getting started\" tutorial](../groonga/) before this.\n"
+"* You must know how to duplicate data between multiple clusters.\n"
+"  Please complete the [\"How to backup and restore the database?\" tutorial](../"
+"dump-restore/) before this."
+msgstr ""
+
+msgid ""
+"This tutorial assumes that there are two existing Droonga nodes prepared by th"
+"e [first tutorial](../groonga/): `node0` (`192.168.100.50`) and `node1` (`192."
+"168.100.51`), and there is another computer `node2` (`192.168.100.52`) for a n"
+"ew node.\n"
+"If you have Droonga nodes with other names, read `node0`, `node1` and `node2` "
+"in following descriptions as yours."
+msgstr ""
+
+msgid "## What's \"replica\"?"
+msgstr ""
+
+msgid "There are two axes, \"replica\" and \"slice\", for Droonga nodes."
+msgstr ""
+
+msgid ""
+"All \"replica\" nodes have completely equal data, so they can process your reque"
+"sts (ex. \"search\") parallelly.\n"
+"You can increase the capacity of your cluster to process increasing requests, "
+"by adding new replicas."
+msgstr ""
+
+msgid ""
+"On the other hand, \"slice\" nodes have different data, for example, one node co"
+"ntains data of the year 2013, another has data of 2014.\n"
+"You can increase the capacity of your cluster to store increasing data, by add"
+"ing new slices."
+msgstr ""
+
+msgid ""
+"Currently, for a Droonga cluster which is configured as a Groonga compatible s"
+"ystem, only replicas can be added, but slices cannot be done.\n"
+"We'll improve extensibility for slices in the future."
+msgstr ""
+
+msgid ""
+"Anyway, this tutorial explains how to add a new replica node to an existing Dr"
+"oogna cluster.\n"
+"Here we go!"
+msgstr ""
+
+msgid "## Add a new replica node to an existing cluster"
+msgstr ""
+
+msgid ""
+"In this case you don't have to stop the cluster working, for any read-only req"
+"uests like \"search\".\n"
+"You can add a new replica, in the backstage, without downing your service."
+msgstr ""
+
+msgid ""
+"On the other hand, you have to stop inpouring of new data to the cluster until"
+" the new node starts working.\n"
+"(In the future we'll provide mechanism to add new nodes completely silently wi"
+"thout any stopping of data-flow, but currently can't.)"
+msgstr ""
+
+msgid ""
+"Assume that there is a Droonga cluster constructed with two replica nodes `nod"
+"e0` and `node1`, and we are going to add a new replica node `node2`."
+msgstr ""
+
+msgid "### Setup a new node"
+msgstr ""
+
+msgid "First, prepare a new computer, install required softwares and configure them."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node2)\n"
+"# curl https://raw.githubusercontent.com/droonga/droonga-engine/master/install"
+".sh | \\\n"
+"    HOST=node2 bash\n"
+"# curl https://raw.githubusercontent.com/droonga/droonga-http-server/master/in"
+"stall.sh | \\\n"
+"    ENGINE_HOST=node2 HOST=node2 bash\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Note, you cannot add a non-empty node to an existing cluster.\n"
+"If the computer was used as a Droonga node in old days, then you must clear ol"
+"d data at first."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node2)\n"
+"# droonga-engine-configure --quiet \\\n"
+"                           --clear --reset-config --reset-catalog \\\n"
+"                           --host=node2\n"
+"# droonga-http-server-configure --quiet --reset-config \\\n"
+"                                --droonga-engine-host-name=node2 \\\n"
+"                                --receive-host-name=node2\n"
+"~~~"
+msgstr ""
+
+msgid "Let's start services."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node2)\n"
+"# service droonga-engine start\n"
+"# service droonga-http-server start\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Currently, the new node doesn't work as a node of the existing cluster.\n"
+"You can confirm that, via the `system.status` command:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ curl \"http://node0:10041/droonga/system/status\" | jq \".\"\n"
+"{\n"
+"  \"nodes\": {\n"
+"    \"node0:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    },\n"
+"    \"node1:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    }\n"
+"  }\n"
+"}\n"
+"$ curl \"http://node1:10041/droonga/system/status\" | jq \".\"\n"
+"{\n"
+"  \"nodes\": {\n"
+"    \"node0:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    },\n"
+"    \"node1:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    }\n"
+"  }\n"
+"}\n"
+"$ curl \"http://node2:10041/droonga/system/status\" | jq \".\"\n"
+"{\n"
+"  \"nodes\": {\n"
+"    \"node2:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    }\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "### Suspend inpouring of \"write\" requests"
+msgstr ""
+
+msgid ""
+"Before starting to change cluster composition, you must suspend inpouring of \""
+"write\" requests to the cluster, because we have to synchronize data to the new"
+" replica.\n"
+"Otherwise, the new added replica will contain incomplete data and results for "
+"requests to the cluster become unstable."
+msgstr ""
+
+msgid ""
+"What's \"write\" request?\n"
+"In particular, these commands modify data in the cluster:"
+msgstr ""
+
+msgid ""
+" * `add`\n"
+" * `column_create`\n"
+" * `column_remove`\n"
+" * `delete`\n"
+" * `load`\n"
+" * `table_create`\n"
+" * `table_remove`"
+msgstr ""
+
+msgid ""
+"If you load new data via the `load` command triggered by a batch script starte"
+"d as a cronjob, disable the job.\n"
+"If a crawler agent adds new data via the `add` command, stop it.\n"
+"If you put a fluentd as a buffer between crawler or loader and the cluster, st"
+"op outgoing messages from the buffer."
+msgstr ""
+
+msgid ""
+"If you are reading this tutorial sequentially after the [previous topic](../du"
+"mp-restore/), there is no incoming requests, so you have nothing to do."
+msgstr ""
+
+msgid "### Joining a new replica node to the cluster"
+msgstr ""
+
+msgid ""
+"To add a new replica node to an existing cluster, you just run a command `droo"
+"nga-engine-join` on one of existing replica nodes or the new replica node, in "
+"the directory the `catalog.json` is located, like:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node2)\n"
+"$ droonga-engine-join --host=node2 \\\n"
+"                      --replica-source-host=node0 \\\n"
+"                      --receiver-host=node2\n"
+"Start to join a new node node2\n"
+"       to the cluster of node0\n"
+"                     via node2 (this host)\""
+msgstr ""
+
+msgid ""
+"Joining new replica to the cluster...\n"
+"...\n"
+"Update existing hosts in the cluster...\n"
+"...\n"
+"Done.\n"
+"~~~"
+msgstr ""
+
+msgid "You can run the command on different node, like:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node1)\n"
+"$ droonga-engine-join --host=node2 \\\n"
+"                      --replica-source-host=node0 \\\n"
+"                      --receiver-host=node1\n"
+"Start to join a new node node2\n"
+"       to the cluster of node0\n"
+"                     via node1 (this host)\"\n"
+"~~~"
+msgstr ""
+
+msgid ""
+" * You must specify the host name (or the IP address) of the new replica node,"
+" via the `--host` option.\n"
+" * You must specify the host name (or the IP address) of an existing node of t"
+"he cluster, via the `--replica-source-host` option.\n"
+" * You must specify the host name (or the IP address) of the working machine v"
+"ia the `--receiver-host` option."
+msgstr ""
+
+msgid ""
+"Then the command automatically starts to synchronize all data of the cluster t"
+"o the new replica node.\n"
+"After data is successfully synchronized, the node restarts and joins to the cl"
+"uster automatically.\n"
+"All nodes' `catalog.json` are also updated, and now, yes, the new node starts "
+"working as a replica in the cluster."
+msgstr ""
+
+msgid ""
+"You can confirm that they are working as a cluster, via the `system.status` co"
+"mmand:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ curl \"http://node0:10041/droonga/system/status\" | jq \".\"\n"
+"{\n"
+"  \"nodes\": {\n"
+"    \"node0:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    },\n"
+"    \"node1:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    },\n"
+"    \"node2:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    }\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Because the new node `node2` has become a member of the cluster, `droonga-http"
+"-server` on each node distributes messages to `node2` also automatically."
+msgstr ""
+
+msgid "### Resume inpouring of \"write\" requests"
+msgstr ""
+
+msgid ""
+"OK, it's the time.\n"
+"Because all replica nodes are completely synchronized, the cluster now can pro"
+"cess any request stably.\n"
+"Resume inpouring of requests which can modify the data in the cluster - cronjo"
+"bs, crawlers, buffers, and so on."
+msgstr ""
+
+msgid "With that, a new replica node has joined to your Droonga cluster successfully."
+msgstr ""
+
+msgid "## Remove an existing replica node from an existing cluster"
+msgstr ""
+
+msgid ""
+"A Droonga node can die by various fatal reasons - for example, OOM killer, dis"
+"k-full error, troubles around its hardware, etc.\n"
+"Because nodes in a Droonga cluster observe each other and they stop delivering"
+" messages to dead nodes automatically, the cluster keeps working even if there"
+" are some dead nodes.\n"
+"Then you have to remove dead nodes from the cluster."
+msgstr ""
+
+msgid ""
+"Of course, even if a node is still working, you may plan to remove it to reuse"
+" for another purpose."
+msgstr ""
+
+msgid ""
+"Assume that there is a Droonga cluster constructed with trhee replica nodes `n"
+"ode0`, `node1` and `node2`, and planning to remove the last node `node2` from "
+"the cluster."
+msgstr ""
+
+msgid "### Unjoin an existing replica from the cluster"
+msgstr ""
+
+msgid ""
+"To remove a replica from an existing cluster, you just run the `droonga-engine"
+"-unjoin` command on any existing node in the cluster, like:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node0)\n"
+"$ droonga-engine-unjoin --host=node2 \\\n"
+"                        --receiver-host=node0\n"
+"Start to unjoin a node node2\n"
+"                    by node0 (this host)"
+msgstr ""
+
+msgid ""
+"Unjoining replica from the cluster...\n"
+"...\n"
+"Done.\n"
+"~~~"
+msgstr ""
+
+msgid ""
+" * You must specify the host name (or the IP address) of an existing node to b"
+"e removed from the cluster, via the `--host` option.\n"
+" * You must specify the host name (or the IP address) of the working machine v"
+"ia the `--receiver-host` option."
+msgstr ""
+
+msgid ""
+"Then the specified node automatically unjoins from the cluster, and all nedes'"
+" `catalog.json` are also updated.\n"
+"Now, the node has been successfully unjoined from the cluster."
+msgstr ""
+
+msgid ""
+"You can confirm that the `node2` is successfully unjoined, via the `system.sta"
+"tus` command:"
+msgstr ""
+
+msgid ""
+"Because the node `node2` is not a member of the cluster anymore, `droonga-http"
+"-server` on `node0` and `node1` never send messages to the `droonga-engine` on"
+" `node2`.\n"
+"On the other hand, because `droonga-http-server` on `node2` is associated only"
+" to the `droonga-engine` on same node, it never sends messages to other nodes."
+msgstr ""
+
+msgid "## Replace an existing replica node in a cluster with a new one"
+msgstr ""
+
+msgid "Replacing of nodes is a combination of those instructions above."
+msgstr ""
+
+msgid ""
+"Assume that there is a Droonga cluster constructed with two replica nodes `nod"
+"e0` and `node1`, the node `node1` is unstable, and planning to replace it with"
+" a new node `node2`."
+msgstr ""
+
+msgid ""
+"First, remove the unstable node.\n"
+"Remove the node from the cluster, like:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node0)\n"
+"$ droonga-engine-unjoin --host=node1\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Now the node has been gone.\n"
+"You can confirm that via the `system.status` command:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ curl \"http://node0:10041/droonga/system/status\" | jq \".\"\n"
+"{\n"
+"  \"nodes\": {\n"
+"    \"node0:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    }\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "### Add a new replica"
+msgstr ""
+
+msgid ""
+"Next, setup the new replica `node2`.\n"
+"Install required packages, generate the `catalog.json`, and start services."
+msgstr ""
+
+msgid ""
+"If the computer was used as a Droonga node in old days, then you must clear ol"
+"d data instead of installation:"
+msgstr ""
+
+msgid "Then, join the node to the cluster."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node2)\n"
+"$ droonga-engine-join --host=node2 \\\n"
+"                      --replica-source-host=node0\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Finally a Droonga cluster constructed with two nodes `node0` and `node2` is he"
+"re."
+msgstr ""
+
+msgid "You can confirm that, via the `system.status` command:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ curl \"http://node0:10041/droonga/system/status\" | jq \".\"\n"
+"{\n"
+"  \"nodes\": {\n"
+"    \"node0:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    },\n"
+"    \"node2:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    }\n"
+"  }\n"
+"}\n"
+"$ curl \"http://node2:10041/droonga/system/status\" | jq \".\"\n"
+"{\n"
+"  \"nodes\": {\n"
+"    \"node0:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    },\n"
+"    \"node2:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    }\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "## Conclusion"
+msgstr ""
+
+msgid ""
+"In this tutorial, you did add a new replica node to an existing [Droonga][] cl"
+"uster.\n"
+"Moreover, you did remove an existing replica, and did replace a replica with a"
+" new one."
+msgstr ""
+
+msgid ""
+"  [Ubuntu]: http://www.ubuntu.com/\n"
+"  [Droonga]: https://droonga.org/\n"
+"  [Groonga]: http://groonga.org/\n"
+"  [command reference]: ../../reference/commands/"
+msgstr ""

  Added: _po/ja/tutorial/1.1.0/basic/index.po (+1304 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/tutorial/1.1.0/basic/index.po    2014-11-30 23:20:40 +0900 (896e48e)
@@ -0,0 +1,1304 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: \"Droonga tutorial: Basic usage of low-layer commands\"\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## The goal of this tutorial"
+msgstr ""
+
+msgid ""
+"Learning steps to setup a Droonga based search system by yourself, with low-la"
+"yer commands of Droonga."
+msgstr ""
+
+msgid "## Precondition"
+msgstr ""
+
+msgid ""
+"* You must have basic knowledge and experiences to setup and operate an [Ubunt"
+"u][] or [CentOS][] Server.\n"
+"* You must have basic knowledge and experiences to develop applications based "
+"on the [Ruby][] and the [Node.js][]."
+msgstr ""
+
+msgid "## Abstract"
+msgstr ""
+
+msgid "### What is the Droonga?"
+msgstr ""
+
+msgid ""
+"It is a data processing engine based on a distributed architecture, named afte"
+"r the terms \"distributed-Groonga\"."
+msgstr ""
+
+msgid ""
+"The Droonga is built on some components which are made as separated packages. "
+"You can develop various data processing systems (for example, a fulltext searc"
+"h engine) with high scalability from a distributed architecture, with those pa"
+"ckages."
+msgstr ""
+
+msgid "### Components of the Droonga"
+msgstr ""
+
+msgid "#### Droonga Engine"
+msgstr ""
+
+msgid ""
+"The component \"Droonga Engine\" is the main part to process data with a distrib"
+"uted architecture. It is triggered by requests and processes various data."
+msgstr ""
+
+msgid ""
+"This component is developed and released as the [droonga-engine][].\n"
+"The protocol is compatible to [Fluentd]."
+msgstr ""
+
+msgid ""
+"It internally uses [Groonga][] as its search engine.\n"
+"Groonga is an open source, fulltext search engine, including a column-store fe"
+"ature."
+msgstr ""
+
+msgid "#### Protocol Adapter"
+msgstr ""
+
+msgid ""
+"The component \"Protocol Adapter\" provides ability for clients to communicate w"
+"ith a Droonga engine, using various protocols."
+msgstr ""
+
+msgid ""
+"The only one available protocol of a Droonga engine is the fluentd protocol.\n"
+"Instead, protocol adapters translate it to other common protocols (like HTTP, "
+"Socket.OP, etc.) between the Droonga Engine and clients."
+msgstr ""
+
+msgid ""
+"Currently, there is an implementation for the HTTP: [droonga-http-server][], a"
+" [Node.js][] module package.\n"
+"In other words, the droonga-http-server is one of Droonga Progocol Adapters, a"
+"nd it's a \"Droonga HTTP Protocol Adapter\"."
+msgstr ""
+
+msgid "## Abstract of the system described in this tutorial"
+msgstr ""
+
+msgid "This tutorial describes steps to build a system like following:"
+msgstr ""
+
+msgid ""
+"    +-------------+              +------------------+             +-----------"
+"-----+\n"
+"    | Web Browser |  <-------->  | Protocol Adapter |  <------->  | Droonga En"
+"gine |\n"
+"    +-------------+   HTTP       +------------------+   Fluent    +-----------"
+"-----+\n"
+"                                 w/droonga-http        protocol   w/droonga-en"
+"gine\n"
+"                                           -server"
+msgstr ""
+
+msgid ""
+"                                 \\--------------------------------------------"
+"------/\n"
+"                                       This tutorial describes about this part"
+"."
+msgstr ""
+
+msgid ""
+"User agents (ex. a Web browser) send search requests to a protocol adapter. Th"
+"e adapter receives them, and sends internal (translated) search requests to a "
+"Droonga engine. The engine processes them actually. Search results are sent fr"
+"om the engine to the protocol adapter, and finally delivered to the user agent"
+"s."
+msgstr ""
+
+msgid ""
+"For example, let's try to build a database system to find [Starbucks stores in"
+" New York](http://geocommons.com/overlays/430038)."
+msgstr ""
+
+msgid "## Prepare an environment for experiments"
+msgstr ""
+
+msgid ""
+"Prepare a computer at first. This tutorial describes steps to develop a search"
+" service based on the Droonga, on an existing computer.\n"
+"Following instructions are basically written for a successfully prepared virtu"
+"al machine of the `Ubuntu 14.04 x64`, `CentOS 7 x64`, or or `CentOS 6.5 x64` o"
+"n the service [DigitalOcean](https://www.digitalocean.com/), with an available"
+" console."
+msgstr ""
+
+msgid ""
+"NOTE: Make sure to use instances with >= 2GB memory equipped, at least during "
+"installation of required packages for Droonga. Otherwise, you possibly experie"
+"nce a strange build error."
+msgstr ""
+
+msgid "Assume that the host is `192.168.100.50`."
+msgstr ""
+
+msgid "## Install Droonga engine"
+msgstr ""
+
+msgid ""
+"The part \"Droonga engine\" stores the database and provides the search feature "
+"actually.\n"
+"In this section we install a droonga-engine and load searchable data to the da"
+"tabase."
+msgstr ""
+
+msgid "### Install `droonga-engine`"
+msgstr ""
+
+msgid "Download the installation script and run it by `bash` as the root user:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# curl https://raw.githubusercontent.com/droonga/droonga-engine/master/install"
+".sh | \\\n"
+"    bash\n"
+"...\n"
+"Installing droonga-engine from RubyGems...\n"
+"...\n"
+"Preparing the user...\n"
+"...\n"
+"Setting up the configuration directory...\n"
+"This node is configured with a hostname XXXXXXXX."
+msgstr ""
+
+msgid ""
+"Registering droonga-engine as a service...\n"
+"...\n"
+"Successfully installed droonga-engine.\n"
+"~~~"
+msgstr ""
+
+msgid "### Prepare configuration files to start `droonga-engine`"
+msgstr ""
+
+msgid ""
+"All configuration files and physical databases are placed under a `droonga` di"
+"rectory in the home directory of the service user `droonga-engine`:"
+msgstr ""
+
+msgid "    $ cd ~droonga-engine/droonga"
+msgstr ""
+
+msgid ""
+"Then, put (overwrite) a configuration file `catalog.json` like following, into"
+" the directory:"
+msgstr ""
+
+msgid "catalog.json:"
+msgstr ""
+
+msgid ""
+"    {\n"
+"      \"version\": 2,\n"
+"      \"effectiveDate\": \"2013-09-01T00:00:00Z\",\n"
+"      \"datasets\": {\n"
+"        \"Default\": {\n"
+"          \"nWorkers\": 4,\n"
+"          \"plugins\": [\"groonga\", \"crud\", \"search\", \"dump\", \"status\"],\n"
+"          \"schema\": {\n"
+"            \"Store\": {\n"
+"              \"type\": \"Hash\",\n"
+"              \"keyType\": \"ShortText\",\n"
+"              \"columns\": {\n"
+"                \"location\": {\n"
+"                  \"type\": \"Scalar\",\n"
+"                  \"valueType\": \"WGS84GeoPoint\"\n"
+"                }\n"
+"              }\n"
+"            },\n"
+"            \"Location\": {\n"
+"              \"type\": \"PatriciaTrie\",\n"
+"              \"keyType\": \"WGS84GeoPoint\",\n"
+"              \"columns\": {\n"
+"                \"store\": {\n"
+"                  \"type\": \"Index\",\n"
+"                  \"valueType\": \"Store\",\n"
+"                  \"indexOptions\": {\n"
+"                    \"sources\": [\"location\"]\n"
+"                  }\n"
+"                }\n"
+"              }\n"
+"            },\n"
+"            \"Term\": {\n"
+"              \"type\": \"PatriciaTrie\",\n"
+"              \"keyType\": \"ShortText\",\n"
+"              \"normalizer\": \"NormalizerAuto\",\n"
+"              \"tokenizer\": \"TokenBigram\",\n"
+"              \"columns\": {\n"
+"                \"stores__key\": {\n"
+"                  \"type\": \"Index\",\n"
+"                  \"valueType\": \"Store\",\n"
+"                  \"indexOptions\": {\n"
+"                    \"position\": true,\n"
+"                    \"sources\": [\"_key\"]\n"
+"                  }\n"
+"                }\n"
+"              }\n"
+"            }\n"
+"          },\n"
+"          \"replicas\": [\n"
+"            {\n"
+"              \"dimension\": \"_key\",\n"
+"              \"slicer\": \"hash\",\n"
+"              \"slices\": [\n"
+"                {\n"
+"                  \"volume\": {\n"
+"                    \"address\": \"192.168.100.50:10031/droonga.000\"\n"
+"                  }\n"
+"                },\n"
+"                {\n"
+"                  \"volume\": {\n"
+"                    \"address\": \"192.168.100.50:10031/droonga.001\"\n"
+"                  }\n"
+"                },\n"
+"                {\n"
+"                  \"volume\": {\n"
+"                    \"address\": \"192.168.100.50:10031/droonga.002\"\n"
+"                  }\n"
+"                }\n"
+"              ]\n"
+"            },\n"
+"            {\n"
+"              \"dimension\": \"_key\",\n"
+"              \"slicer\": \"hash\",\n"
+"              \"slices\": [\n"
+"                {\n"
+"                  \"volume\": {\n"
+"                    \"address\": \"192.168.100.50:10031/droonga.010\"\n"
+"                  }\n"
+"                },\n"
+"                {\n"
+"                  \"volume\": {\n"
+"                    \"address\": \"192.168.100.50:10031/droonga.011\"\n"
+"                  }\n"
+"                },\n"
+"                {\n"
+"                  \"volume\": {\n"
+"                    \"address\": \"192.168.100.50:10031/droonga.012\"\n"
+"                  }\n"
+"                }\n"
+"              ]\n"
+"            }\n"
+"          ]\n"
+"        }\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid "This `catalog.json` defines a dataset `Default` as:"
+msgstr ""
+
+msgid ""
+" * At the top level, there is one volume based on two sub volumes, called \"rep"
+"licas\".\n"
+" * At the next lower level, one replica volume is based on three sub volumes, "
+"called \"slices\".\n"
+"   They are minimum elements constructing a Droonga's dataset."
+msgstr ""
+
+msgid ""
+"These six atomic volumes having `\"address\"` information are internally called "
+"as *single volume*s.\n"
+"The `\"address\"` indicates the location of the corresponding physical storage w"
+"hich is a database for Groonga, they are managed by `droonga-engine` instances"
+" automatically."
+msgstr ""
+
+msgid ""
+"For more details of the configuration file `catalog.json`, see [the reference "
+"manual of catalog.json](/reference/catalog)."
+msgstr ""
+
+msgid "### Start and stop the `droonga-engine` service"
+msgstr ""
+
+msgid "The `droonga-engine` service can be started via the `service` command:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# service droonga-engine start\n"
+"~~~"
+msgstr ""
+
+msgid "To stop it, you also have to use the `service` command:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# service droonga-engine stop\n"
+"~~~"
+msgstr ""
+
+msgid "After confirmation, start the `droonga-engine` again."
+msgstr ""
+
+msgid "### Create a database"
+msgstr ""
+
+msgid ""
+"After a Droonga engine is started, let's load data.\n"
+"Prepare `stores.jsons` including location data of stores."
+msgstr ""
+
+msgid "stores.jsons:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"1st Avenue & 75th St. - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.770262,-73.954798\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"76th & Second - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.771056,-73.956757\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"2nd Ave. & 9th Street - New York NY\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.729445,-73.987471\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"15th & Third - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.733946,-73.9867\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"41st and Broadway - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.755111,-73.986225\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"84th & Third Ave - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.777485,-73.954979\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"150 E. 42nd Street - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.750784,-73.975582\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"West 43rd and Broadway - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.756197,-73.985624\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"Macy's 35th Street Balcony - New York NY\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.750703,-73.989787\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"Macy's 6th Floor - Herald Square - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.750703,-73.989787\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"Herald Square- Macy's - New York NY\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.750703,-73.989787\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"Macy's 5th Floor - Herald Square - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.750703,-73.989787\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"80th & York - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.772204,-73.949862\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"Columbus @ 67th - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.774009,-73.981472\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"45th & Broadway - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.75766,-73.985719\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"Marriott Marquis - Lobby - New York NY\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.759123,-73.984927\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"Second @ 81st - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.77466,-73.954447\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"52nd & Seventh - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.761829,-73.981141\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"1585 Broadway (47th) - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.759806,-73.985066\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"85th & First - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.776101,-73.949971\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"92nd & 3rd - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.782606,-73.951235\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"165 Broadway - 1 Liberty - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.709727,-74.011395\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"1656 Broadway - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.762434,-73.983364\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"54th & Broadway - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.764275,-73.982361\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"Limited Brands-NYC - New York NY\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.765219,-73.982025\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"19th & 8th - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.743218,-74.000605\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"60th & Broadway-II - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.769196,-73.982576\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"63rd & Broadway - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.771376,-73.982709\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"195 Broadway - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.710703,-74.009485\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"2 Broadway - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.704538,-74.01324\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"2 Columbus Ave. - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.769262,-73.984764\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"NY Plaza - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.702802,-74.012784\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"36th and Madison - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.748917,-73.982683\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"125th St. btwn Adam Clayton & FDB - New York NY\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.808952,-73.948229\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"70th & Broadway - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.777463,-73.982237\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"2138 Broadway - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.781078,-73.981167\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"118th & Frederick Douglas Blvd. - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.806176,-73.954109\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"42nd & Second - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.750069,-73.973393\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"Broadway @ 81st - New York NY  (W)\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.784972,-73.978987\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"add\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"Fashion Inst of Technology - New York NY\",\n"
+"    \"values\": {\n"
+"      \"location\": \"40.746948,-73.994557\"\n"
+"    }\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "Open another terminal and send the json to the Droonga engine."
+msgstr ""
+
+msgid "Send `stores.jsons` as follows:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ droonga-request stores.jsons\n"
+"Elapsed time: 0.01101195\n"
+"[\n"
+"  \"droonga.message\",\n"
+"  1393562553,\n"
+"  {\n"
+"    \"inReplyTo\": \"1393562553.8918273\",\n"
+"    \"statusCode\": 200,\n"
+"    \"type\": \"add.result\",\n"
+"    \"body\": true\n"
+"  }\n"
+"]\n"
+"...\n"
+"Elapsed time: 0.004817463\n"
+"[\n"
+"  \"droonga.message\",\n"
+"  1393562554,\n"
+"  {\n"
+"    \"inReplyTo\": \"1393562554.2447524\",\n"
+"    \"statusCode\": 200,\n"
+"    \"type\": \"add.result\",\n"
+"    \"body\": true\n"
+"  }\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid "Now a Droonga engine for searching Starbucks stores database is ready."
+msgstr ""
+
+msgid "### Send request with droonga-request"
+msgstr ""
+
+msgid "Check if it is working. Create a query as a JSON file as follows."
+msgstr ""
+
+msgid "search-all-stores.json:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"type\": \"search\",\n"
+"  \"body\": {\n"
+"    \"queries\": {\n"
+"      \"stores\": {\n"
+"        \"source\": \"Store\",\n"
+"        \"output\": {\n"
+"          \"elements\": [\n"
+"            \"startTime\",\n"
+"            \"elapsedTime\",\n"
+"            \"count\",\n"
+"            \"attributes\",\n"
+"            \"records\"\n"
+"          ],\n"
+"          \"attributes\": [\"_key\"],\n"
+"          \"limit\": -1\n"
+"        }\n"
+"      }\n"
+"    }\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "Send the request to the Droonga Engine:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ droonga-request search-all-stores.json\n"
+"Elapsed time: 0.008286785\n"
+"[\n"
+"  \"droonga.message\",\n"
+"  1393562604,\n"
+"  {\n"
+"    \"inReplyTo\": \"1393562604.4970381\",\n"
+"    \"statusCode\": 200,\n"
+"    \"type\": \"search.result\",\n"
+"    \"body\": {\n"
+"      \"stores\": {\n"
+"        \"count\": 40,\n"
+"        \"records\": [\n"
+"          [\n"
+"            \"15th & Third - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"41st and Broadway - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"84th & Third Ave - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"Macy's 35th Street Balcony - New York NY\"\n"
+"          ],\n"
+"          [\n"
+"            \"Second @ 81st - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"52nd & Seventh - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"1585 Broadway (47th) - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"54th & Broadway - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"60th & Broadway-II - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"63rd & Broadway - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"2 Columbus Ave. - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"NY Plaza - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"2138 Broadway - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"Broadway @ 81st - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"76th & Second - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"2nd Ave. & 9th Street - New York NY\"\n"
+"          ],\n"
+"          [\n"
+"            \"150 E. 42nd Street - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"Macy's 6th Floor - Herald Square - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"Herald Square- Macy's - New York NY\"\n"
+"          ],\n"
+"          [\n"
+"            \"Macy's 5th Floor - Herald Square - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"Marriott Marquis - Lobby - New York NY\"\n"
+"          ],\n"
+"          [\n"
+"            \"85th & First - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"1656 Broadway - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"Limited Brands-NYC - New York NY\"\n"
+"          ],\n"
+"          [\n"
+"            \"2 Broadway - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"36th and Madison - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"125th St. btwn Adam Clayton & FDB - New York NY\"\n"
+"          ],\n"
+"          [\n"
+"            \"118th & Frederick Douglas Blvd. - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"Fashion Inst of Technology - New York NY\"\n"
+"          ],\n"
+"          [\n"
+"            \"1st Avenue & 75th St. - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"West 43rd and Broadway - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"80th & York - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"Columbus @ 67th - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"45th & Broadway - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"92nd & 3rd - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"165 Broadway - 1 Liberty - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"19th & 8th - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"195 Broadway - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"70th & Broadway - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"42nd & Second - New York NY  (W)\"\n"
+"          ]\n"
+"        ]\n"
+"      }\n"
+"    }\n"
+"  }\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Now the store names are retrieved. The engine looks working correctly.\n"
+"Next, setup a protocol adapter for clients to accept search requests via HTTP."
+msgstr ""
+
+msgid "## Setup an HTTP Protocol Adapter"
+msgstr ""
+
+msgid "Let's use the `droonga-http-server` as an HTTP protocol adapter."
+msgstr ""
+
+msgid "### Install the droonga-http-server"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# curl https://raw.githubusercontent.com/droonga/droonga-http-server/master/in"
+"stall.sh | \\\n"
+"    bash\n"
+"...\n"
+"Installing droonga-http-server from npmjs.org...\n"
+"...\n"
+"Preparing the user...\n"
+"...\n"
+"Setting up the configuration directory...\n"
+"The droonga-engine service is detected on this node.\n"
+"The droonga-http-server is configured to be connected\n"
+"to this node (XXXXXXXX).\n"
+"This node is configured with a hostname XXXXXXXX."
+msgstr ""
+
+msgid ""
+"Registering droonga-http-server as a service...\n"
+"...\n"
+"Successfully installed droonga-http-server.\n"
+"~~~"
+msgstr ""
+
+msgid "### Start and stop the `droonga-http-server` service"
+msgstr ""
+
+msgid "The `droonga-http-server` service can be started via the `service` command:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# service droonga-http-server start\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# service droonga-http-server stop\n"
+"~~~"
+msgstr ""
+
+msgid "After confirmation, start the `droonga-http-server` again."
+msgstr ""
+
+msgid "### Search request via HTTP"
+msgstr ""
+
+msgid ""
+"We're all set. Let's send a search request to the protocol adapter via HTTP. A"
+"t first, try to get all records of the `Stores` table by a request like follow"
+"ing. (Note: The `attributes=_key` parameter means \"export the value of the col"
+"umn `_key` to the search result\". If you don't set the parameter, each record "
+"returned in the `records` will become just a blank array. You can specify mult"
+"iple column names by the delimiter `,`. For example `attributes=_key,location`"
+" will return both the primary key and the location for each record.)"
+msgstr ""
+
+msgid ""
+"    $ curl \"http://192.168.100.50:10041/tables/Store?attributes=_key&limit=-1\""
+"\n"
+"    {\n"
+"      \"stores\": {\n"
+"        \"count\": 40,\n"
+"        \"records\": [\n"
+"          [\n"
+"            \"15th & Third - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"41st and Broadway - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"84th & Third Ave - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"Macy's 35th Street Balcony - New York NY\"\n"
+"          ],\n"
+"          [\n"
+"            \"Second @ 81st - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"52nd & Seventh - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"1585 Broadway (47th) - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"54th & Broadway - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"60th & Broadway-II - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"63rd & Broadway - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"2 Columbus Ave. - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"NY Plaza - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"2138 Broadway - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"Broadway @ 81st - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"76th & Second - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"2nd Ave. & 9th Street - New York NY\"\n"
+"          ],\n"
+"          [\n"
+"            \"150 E. 42nd Street - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"Macy's 6th Floor - Herald Square - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"Herald Square- Macy's - New York NY\"\n"
+"          ],\n"
+"          [\n"
+"            \"Macy's 5th Floor - Herald Square - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"Marriott Marquis - Lobby - New York NY\"\n"
+"          ],\n"
+"          [\n"
+"            \"85th & First - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"1656 Broadway - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"Limited Brands-NYC - New York NY\"\n"
+"          ],\n"
+"          [\n"
+"            \"2 Broadway - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"36th and Madison - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"125th St. btwn Adam Clayton & FDB - New York NY\"\n"
+"          ],\n"
+"          [\n"
+"            \"118th & Frederick Douglas Blvd. - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"Fashion Inst of Technology - New York NY\"\n"
+"          ],\n"
+"          [\n"
+"            \"1st Avenue & 75th St. - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"West 43rd and Broadway - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"80th & York - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"Columbus @ 67th - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"45th & Broadway - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"92nd & 3rd - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"165 Broadway - 1 Liberty - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"19th & 8th - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"195 Broadway - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"70th & Broadway - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"42nd & Second - New York NY  (W)\"\n"
+"          ]\n"
+"        ]\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid ""
+"Because the `count` says `40`, you know there are all 40 records in the table."
+" Search result records are returned as an array `records`."
+msgstr ""
+
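+msgid ""
+"As noted above, you can export multiple columns at once. For example, a requ"
+"est like the following sketch (assuming the same host and dataset) returns b"
+"oth the primary key and the location; each entry in `records` then contains "
+"two values instead of one:"
+msgstr ""
+
+msgid ""
+"    $ curl \"http://192.168.100.50:10041/tables/Store?attributes=_key,locatio"
+"n&limit=-1\""
+msgstr ""
+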
+msgid ""
+"Next step, let's try more meaningful query. To search stores which contain \"Co"
+"lumbus\" in their name, give `Columbus` as the parameter `query`, and give `_ke"
+"y` as the parameter `match_to` which means the column to be searched. Then:"
+msgstr ""
+
+msgid ""
+"    $ curl \"http://192.168.100.50:10041/tables/Store?query=Columbus&match_to=_"
+"key&attributes=_key&limit=-1\"\n"
+"    {\n"
+"      \"stores\": {\n"
+"        \"count\": 2,\n"
+"        \"records\": [\n"
+"          [\n"
+"            \"Columbus @ 67th - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"2 Columbus Ave. - New York NY  (W)\"\n"
+"          ]\n"
+"        ]\n"
+"      }\n"
+"    }"
+msgstr ""
+
+msgid "As the result, two stores are found by the search condition."
+msgstr ""
+
+msgid ""
+"For more details of the Droonga HTTP Server, see the [reference manual][http-s"
+"erver]."
+msgstr ""
+
+msgid "## Conclusion"
+msgstr ""
+
+msgid ""
+"In this tutorial, you did setup both packages [droonga-engine][] and [droonga-"
+"http-server][] which construct [Droonga][] service on a [Ubuntu Linux][Ubuntu]"
+" or [CentOS][] computer.\n"
+"Moreover, you built a search system based on an HTTP protocol adapter with a D"
+"roonga engine, and successfully searched."
+msgstr ""
+
+msgid ""
+"  [http-server]: ../../reference/http-server/\n"
+"  [Ubuntu]: http://www.ubuntu.com/\n"
+"  [CentOS]: https://www.centos.org/\n"
+"  [Droonga]: https://droonga.org/\n"
+"  [droonga-engine]: https://github.com/droonga/droonga-engine\n"
+"  [droonga-http-server]: https://github.com/droonga/droonga-http-server\n"
+"  [Groonga]: http://groonga.org/\n"
+"  [Ruby]: http://www.ruby-lang.org/\n"
+"  [nvm]: https://github.com/creationix/nvm\n"
+"  [Socket.IO]: http://socket.io/\n"
+"  [Fluentd]: http://fluentd.org/\n"
+"  [Node.js]: http://nodejs.org/"
+msgstr ""

  Added: _po/ja/tutorial/1.1.0/benchmark/index.po (+1238 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/tutorial/1.1.0/benchmark/index.po    2014-11-30 23:20:40 +0900 (616c17a)
@@ -0,0 +1,1238 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: \"How to benchmark Droonga with Groonga?\"\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid ""
+"<!--\n"
+"this is based on https://github.com/droonga/presentation-droonga-meetup-1-intr"
+"oduction/blob/master/benchmark/README.md\n"
+"-->"
+msgstr ""
+
+msgid "## The goal of this tutorial"
+msgstr ""
+
+msgid ""
+"Learning steps to benchmark a [Droonga][] cluster and compare it to a [Groonga"
+"][groonga] server."
+msgstr ""
+
+msgid "## Precondition"
+msgstr ""
+
+msgid ""
+"* You must have basic knowledge and experiences to set up and operate an [Ubun"
+"tu][] or [CentOS][] Server.\n"
+"* You must have basic knowledge and experiences to use the [Groonga][groonga] "
+"via HTTP.\n"
+"* You must have basic knowledge to construct a [Droonga][] cluster.\n"
+"  Please complete the [\"getting started\" tutorial](../groonga/) before this."
+msgstr ""
+
+msgid "## Why benchmarking?"
+msgstr ""
+
+msgid ""
+"Because Droonga has compatibility to Groonga, you'll plan to migrate your appl"
+"ication based on Groonga to Droonga.\n"
+"Before that, you should benchmark Droonga and confirm that it is better altern"
+"ative for your application."
+msgstr ""
+
+msgid ""
+"Of course you may simply hope to know the difference in performance between Gr"
+"oonga and Droonga.\n"
+"Benchmarking will make it clear."
+msgstr ""
+
+msgid "### How visualize the performance?"
+msgstr ""
+
+msgid "There are two major indexes to indicate performance of a system."
+msgstr ""
+
+msgid ""
+" * latency\n"
+" * throughput"
+msgstr ""
+
+msgid ""
+"Latency is the response time, actual elapsed time between two moments: when th"
+"e system receives a request, and when it returns a response.\n"
+"In other words, for clients, it is the time to wait for each request.\n"
+"At this index, the smaller is the better.\n"
+"In general, latency becomes small for lightweight queries, small size database"
+", or less clients."
+msgstr ""
+
+msgid ""
+"Throughput means how many request can be processed in a time.\n"
+"The performance index is described as \"*queries per second* (*qps*)\".\n"
+"For example, if a Groonga server processed 10 requests in one second, that is "
+"described as \"10qps\".\n"
+"Possibly there are 10 users (clients), or, there are 2 users and each user ope"
+"ns 5 tabs in his web browser.\n"
+"Anyway, \"10qps\" means that the Groonga actually accepted and responded for 10 "
+"requests while one second is passing."
+msgstr ""
+
+msgid ""
+"You can run benchmark with the command `drnbench-request-response`, introduced"
+" by the Gem package [drnbench]().\n"
+"It measures both latency and throughput of the target service."
+msgstr ""
+
+msgid "### How the benchmark tool measures the performance?"
+msgstr ""
+
+msgid ""
+"`drnbench-request-response` benchmarks the target service, by steps like follo"
+"wing:"
+msgstr ""
+
+msgid ""
+" 1. The master process generates one virtual client.\n"
+"    The client starts to send many requests to the target sequentially and fre"
+"quently.\n"
+" 2. After a while, the master process kills the client.\n"
+"    Then he calculates minimum, maximum, and average elapsed time, from respon"
+"se data.\n"
+"    And, he counts up the number of requests actually processed by the target,"
+" and reports it as \"qps\" of the single client case.\n"
+" 3. The master process generates two virtual clients.\n"
+"    They starts to send requests.\n"
+" 4. After a while, the master process kills all clients.\n"
+"    Then minimum, maximum, and average elapsed time is calculated, and total n"
+"umber of processed requests sent by all clients is reported as \"qps\" of the tw"
+"o clients case.\n"
+" 5. Repeated with three clients, four clients ... and more progressively.\n"
+" 6. Finally, the master process reports minimum/maximum/average elapsed time, "
+"\"qps\", and other extra information for each case, as a CSV file like:"
+msgstr ""
+
+msgid ""
+"    ~~~\n"
+"    n_clients,total_n_requests,queries_per_second,min_elapsed_time,max_elapsed"
+"_time,average_elapsed_time,200\n"
+"    1,996,33.2,0.001773766,0.238031643,0.019765581680722916,100.0\n"
+"    2,1973,65.76666666666667,0.001558398,0.272225481,0.020047345673086702,100."
+"0\n"
+"    4,3559,118.63333333333334,0.001531184,0.39942581,0.023357554419499882,100."
+"0\n"
+"    6,4540,151.33333333333334,0.001540704,0.501663069,0.042344890696916264,100"
+".0\n"
+"    8,4247,141.56666666666666,0.001483995,0.577100609,0.045836844514480835,100"
+".0\n"
+"    10,4466,148.86666666666667,0.001987089,0.604507078,0.06949704923846833,100"
+".0\n"
+"    12,4500,150.0,0.001782343,0.612596799,0.06902839555222215,100.0\n"
+"    14,4183,139.43333333333334,0.001980711,0.60754769,0.1033681068718623,100.0"
+"\n"
+"    16,4519,150.63333333333333,0.00284654,0.653204575,0.09473386513387955,100."
+"0\n"
+"    18,4362,145.4,0.002330049,0.640683693,0.12581190483929405,100.0\n"
+"    20,4228,140.93333333333334,0.003710795,0.662666076,0.1301649290901133,100."
+"0\n"
+"    ~~~"
+msgstr ""
+
+msgid "    You can analyze it, draw a graph from it, and so on."
+msgstr ""
+
+msgid ""
+"    (Note: Performance results fluctuate from various factors.\n"
+"    This is just an example on a specific version, specific environment.)"
+msgstr ""
+
+msgid "### How read and analyze the result? {#how-to-analyze}"
+msgstr ""
+
+msgid "Look at the result above."
+msgstr ""
+
+msgid "#### HTTP response statuses"
+msgstr ""
+
+msgid ""
+"See the last columns named `200`.\n"
+"It means the percentage of HTTP response statuses.\n"
+"`200` is \"OK\", `0` is \"timed out\".\n"
+"If clients got `400`, `500` and other errors, they will be also reported.\n"
+"These information will help you to detect unexpected slow down."
+msgstr ""
+
+msgid "#### Latency"
+msgstr ""
+
+msgid ""
+"Latency is easily analyzed - the smaller is the better.\n"
+"The minimum and average elapsed time becomes small if any cache system is work"
+"ing correctly on the target.\n"
+"The maximum time is affected by slow queries, system's page-in/page-out, unexp"
+"ected errors, and so on."
+msgstr ""
+
+msgid ""
+"A graph of latency also reveals the maximum number of effectively acceptable c"
+"onnections in same time."
+msgstr ""
+
+msgid "![A graph of latency](/images/tutorial/benchmark/latency-groonga-1.0.8.png)"
+msgstr ""
+
+msgid ""
+"This is a graph of `average_elapsed_time`.\n"
+"You'll see that the time is increased for over 4 clients.\n"
+"What it means?"
+msgstr ""
+
+msgid ""
+"Groonga can process multiple requests completely parallelly, until the number "
+"of available processors.\n"
+"When the computer has 4 processors, the system can process 4 or less requests "
+"in same time, without extra latency.\n"
+"And, if more requests are sent, 5th and later requests will be processed after"
+" a preceding request is processed.\n"
+"The graph confirms that the logical limitation is true."
+msgstr ""
+
+msgid "#### Throughput"
+msgstr ""
+
+msgid "A graph helps you to analyze throughput performance."
+msgstr ""
+
+msgid ""
+"![A graph of throughput](/images/tutorial/benchmark/throughput-groonga-1.0.8.p"
+"ng)"
+msgstr ""
+
+msgid ""
+"You'll see that the \"qps\" stagnated around 150, for 6 or more clients.\n"
+"This means that the target service can process 150 requests in one second, at "
+"a maximum."
+msgstr ""
+
+msgid ""
+"In other words, we can describe the result as: 150qps is the maximum throughpu"
+"t performance of this system - generic performance of hardware, software, netw"
+"ork, size of the database, queries, and more.\n"
+"If the number of requests for your service is growing up and it is going to re"
+"ach the limit, you have to do something about it - optimize queries, replace t"
+"he computer with more powerful one, and so on."
+msgstr ""
+
+msgid "#### Performance comparison"
+msgstr ""
+
+msgid ""
+"Sending same request patterns to Groonga and Droonga, you can compare performa"
+"nce of each system.\n"
+"If Droonga has better performance, it will become good reason to migrate your "
+"service from Groogna to Droonga."
+msgstr ""
+
+msgid ""
+"Moreover, comparing multiple results from different number of Droogna nodes, y"
+"ou can analyze the cost-benefit performance for newly introduced nodes."
+msgstr ""
+
+msgid "## Prepare environments for benchmarking"
+msgstr ""
+
+msgid ""
+"Assume that there are four [Ubuntu][] 14.04LTS servers for the new Droogna clu"
+"ster and they can resolve their names each other:"
+msgstr ""
+
+msgid ""
+" * `192.168.100.50`, the host name is `node0`\n"
+" * `192.168.100.51`, the host name is `node1`\n"
+" * `192.168.100.52`, the host name is `node2`\n"
+" * `192.168.100.53`, the host name is `node3`"
+msgstr ""
+
+msgid "One is client, others are Droonga nodes."
+msgstr ""
+
+msgid "### Ensure an existing reference database (and the data source)"
+msgstr ""
+
+msgid ""
+"If you have any existing service based on Groonga, it becomes the reference.\n"
+"Then you just have to dump all data in your Groonga database and load them to "
+"a new Droonga cluster."
+msgstr ""
+
+msgid ""
+"Otherwise - if you have no existing service, prepare a new reference database "
+"with much data for effective benchmark.\n"
+"The repository [wikipedia-search][] includes some helper scripts to construct "
+"your Groonga server (and Droonga cluster), with [Japanese Wikipedia](http://ja"
+".wikipedia.org/) pages."
+msgstr ""
+
+msgid ""
+"So let's prepare a new Groonga database including Wikipedia pages, on the `nod"
+"e0`."
+msgstr ""
+
+msgid ""
+" 1. Determine the size of the database.\n"
+"    You have to use good enough size database for benchmarking."
+msgstr ""
+
+msgid ""
+"    * If it is too small, you'll see \"too bad\" benchmark result for Droonga, b"
+"ecause the percentage of the Droonga's overhead becomes relatively too large.\n"
+"    * If it is too large, you'll see \"too unstable\" result because page-in and"
+" page-out of RAM will slow the performance down randomly.\n"
+"    * If RAM size of all nodes are different, you should determine the size of"
+" the database for the minimum size RAM."
+msgstr ""
+
+msgid ""
+"    For example, if there are three nodes `node0` (8GB RAM), `node1` (8GB RAM)"
+", and `node2` (6GB RAM), then the database should be smaller than 6GB.\n"
+" 2. Set up the Groonga server, as instructed on [the installation guide](http:"
+"//groonga.org/docs/install.html)."
+msgstr ""
+
+msgid ""
+"    ~~~\n"
+"    (on node0)\n"
+"    % sudo apt-get -y install software-properties-common\n"
+"    % sudo add-apt-repository -y universe\n"
+"    % sudo add-apt-repository -y ppa:groonga/ppa\n"
+"    % sudo apt-get update\n"
+"    % sudo apt-get -y install groonga\n"
+"    ~~~"
+msgstr ""
+
+msgid ""
+"    Then the Groonga becomes available.\n"
+" 3. Download the archive of Wikipedia pages and convert it to a dump file for "
+"Groonga, with the rake task `data:convert:groonga:ja`.\n"
+"    You can specify the number of records (pages) to be converted via the envi"
+"ronment variable `MAX_N_RECORDS` (default=5000)."
+msgstr ""
+
+msgid ""
+"    ~~~\n"
+"    (on node0)\n"
+"    % cd ~/\n"
+"    % git clone https://github.com/droonga/wikipedia-search.git\n"
+"    % cd wikipedia-search\n"
+"    % bundle install --path vendor/\n"
+"    % time (MAX_N_RECORDS=1500000 bundle exec rake data:convert:groonga:ja \\\n"
+"                                    data/groonga/ja-pages.grn)\n"
+"    ~~~"
+msgstr ""
+
+msgid ""
+"    Because the archive is very large, downloading and data conversion may tak"
+"e time."
+msgstr ""
+
+msgid ""
+"    After that, a dump file `~/wikipedia-search/data/groonga/ja-pages.grn` is "
+"there.\n"
+"    Create a new database and load the dump file to it.\n"
+"    This also may take more time:"
+msgstr ""
+
+msgid ""
+"    ~~~\n"
+"    (on node0)\n"
+"    % mkdir -p $HOME/groonga/db/\n"
+"    % groonga -n $HOME/groonga/db/db quit\n"
+"    % time (cat ~/wikipedia-search/config/groonga/schema.grn | groonga $HOME/g"
+"roonga/db/db)\n"
+"    % time (cat ~/wikipedia-search/config/groonga/indexes.grn | groonga $HOME/"
+"groonga/db/db)\n"
+"    % time (cat ~/wikipedia-search/data/groonga/ja-pages.grn | groonga $HOME/g"
+"roonga/db/db)\n"
+"    ~~~"
+msgstr ""
+
+msgid ""
+"    Note: number of records affects to the database size.\n"
+"    Just for information, my results are here:"
+msgstr ""
+
+msgid ""
+"     * 1.1GB database was constructed from 300000 records.\n"
+"       Data conversion took 17 min, data loading took 6 min.\n"
+"     * 4.3GB database was constructed from 1500000 records.\n"
+"       Data conversion took 53 min, data loading took 64 min."
+msgstr ""
+
+msgid " 4. Start the Groonga as an HTTP server."
+msgstr ""
+
+msgid ""
+"    ~~~\n"
+"    (on node0)\n"
+"    % groonga -p 10041 -d --protocol http $HOME/groonga/db/db\n"
+"    ~~~"
+msgstr ""
+
+msgid "OK, now we can use this node as the reference for benchmarking."
+msgstr ""
+
+msgid "### Set up a Droonga cluster"
+msgstr ""
+
+msgid ""
+"Install Droonga to all nodes.\n"
+"Because we are benchmarking it via HTTP, you have to install both services `dr"
+"oonga-engine` and `droonga-http-server` for each node."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node0)\n"
+"% host=node0\n"
+"% curl https://raw.githubusercontent.com/droonga/droonga-engine/master/install"
+".sh | \\\n"
+"    sudo HOST=$host bash\n"
+"% curl https://raw.githubusercontent.com/droonga/droonga-http-server/master/in"
+"stall.sh | \\\n"
+"    sudo ENGINE_HOST=$host HOST=$host PORT=10042 bash\n"
+"% sudo droonga-engine-catalog-generate \\\n"
+"    --hosts=node0,node1,node2\n"
+"% sudo service droonga-engine start\n"
+"% sudo service droonga-http-server start\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node1)\n"
+"% host=node1\n"
+"...\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node2)\n"
+"% host=node2\n"
+"...\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Note: to start `droonga-http-server` with a port number different from Groonga"
+", we should specify another port `10042` via the `PORT` environment variable, "
+"like above."
+msgstr ""
+
+msgid ""
+"Make sure that Droonga's HTTP server is actualy listening the port `10042` and"
+" it is working as a cluster with three nodes:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node0)\n"
+"% sudo apt-get install -y jq\n"
+"% curl \"http://node0:10042/droonga/system/status\" | jq .\n"
+"{\n"
+"  \"nodes\": {\n"
+"    \"node0:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    },\n"
+"    \"node1:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    },\n"
+"    \"node2:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    }\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "### Synchronize data from Groonga to Droonga"
+msgstr ""
+
+msgid "Next, prepare the Droonga database."
+msgstr ""
+
+msgid ""
+"You can generate messages for Droonga from Groonga's dump result, by the `grn2"
+"drn` command.\n"
+"Install `grn2drn` Gem package to activate the command, to the Groonga server c"
+"omputer."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node0)\n"
+"% sudo gem install grn2drn\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"And, the `grndump` command introduced as a part of `rroonga` Gem package provi"
+"des ability to extract all data of an existing Groonga database, flexibly.\n"
+"If you are going to extract data from an existing Groonga server, you have to "
+"install `rroonga` before that."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on Ubuntu server)\n"
+"% sudo apt-get -y install software-properties-common\n"
+"% sudo add-apt-repository -y universe\n"
+"% sudo add-apt-repository -y ppa:groonga/ppa\n"
+"% sudo apt-get update\n"
+"% sudo apt-get -y install libgroonga-dev\n"
+"% sudo gem install rroonga\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on CentOS server)\n"
+"# rpm -ivh http://packages.groonga.org/centos/groonga-release-1.1.0-1.noarch.r"
+"pm\n"
+"# yum -y makecache\n"
+"# yum -y ruby-devel groonga-devel\n"
+"# gem install rroonga\n"
+"~~~"
+msgstr ""
+
+msgid "Then dump schemas and data separately and load them to the Droonga cluster."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node0)\n"
+"% time (grndump --no-dump-tables $HOME/groonga/db/db | \\\n"
+"          grn2drn | \\\n"
+"          droonga-send --server=node0 \\\n"
+"                       --report-throughput)\n"
+"% time (grndump --no-dump-schema --no-dump-indexes $HOME/groonga/db/db | \\\n"
+"          grn2drn | \\\n"
+"          droonga-send --server=node0 \\\n"
+"                       --server=node1 \\\n"
+"                       --server=node2 \\\n"
+"                       --messages-per-second=100 \\\n"
+"                       --report-throughput)\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Note that you must send requests for schema and indexes to just one endpoint.\n"
+"Parallel sending of schema definition requests for multiple endpoints will bre"
+"ak the database, because Droonga cannot sort schema changing commands sent to "
+"each node in parallel."
+msgstr ""
+
+msgid ""
+"To reduce traffic and system load, you should specify maximum number of inpour"
+"ing messages per second by the `--messages-per-second` option.\n"
+"If too many messages rush into the Droonga cluster, they may overflow - Droong"
+"a may eat up the RAM and slow down the system."
+msgstr ""
+
+msgid ""
+"This may take much time.\n"
+"For example, with the option `--messages-per-second=100`, 1500000 records will"
+" be synchronized in about 4 hours (we can estimate the required time like: `15"
+"0000 / 100 / 60 / 60`)."
+msgstr ""
+
+msgid ""
+"After all, now you have two HTTP servers: Groonga HTTP server with the port `1"
+"0041`, and Droonga HTTP Servers with the port `10042`."
+msgstr ""
+
+msgid "### Set up the client"
+msgstr ""
+
+msgid "You must install the benchmark client to the computer."
+msgstr ""
+
+msgid "Assume that you use a computer `node3` as the client:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node3)\n"
+"% sudo apt-get update\n"
+"% sudo apt-get -y upgrade\n"
+"% sudo apt-get install -y ruby curl jq\n"
+"% sudo gem install drnbench\n"
+"~~~"
+msgstr ""
+
+msgid "## Prepare request patterns"
+msgstr ""
+
+msgid "Let's prepare request pattern files for benchmarking."
+msgstr ""
+
+msgid "### Determine the expected cache hit rate"
+msgstr ""
+
+msgid "First, you have to determine the cache hit rate."
+msgstr ""
+
+msgid ""
+"If you have any existing service based on Groonga, you can get the actual cach"
+"e hit rate of the Groonga database via `status` command, like:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"% curl \"http://node0:10041/d/status\" | jq .\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1412326645.19701,\n"
+"    3.76701354980469e-05\n"
+"  ],\n"
+"  {\n"
+"    \"max_command_version\": 2,\n"
+"    \"alloc_count\": 158,\n"
+"    \"starttime\": 1412326485,\n"
+"    \"uptime\": 160,\n"
+"    \"version\": \"4.0.6\",\n"
+"    \"n_queries\": 1000,\n"
+"    \"cache_hit_rate\": 0.5,\n"
+"    \"command_version\": 1,\n"
+"    \"default_command_version\": 1\n"
+"  }\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"The cache hit rate appears as `\"cache_hit_rate\"`.\n"
+"`0.5` means 50%, then a half of responses are returned from cached results."
+msgstr ""
+
+msgid ""
+"If you have no existing service, you should assume that the cache hit rate bec"
+"omes 50%."
+msgstr ""
+
+msgid ""
+"To measure and compare performance of Groonga and Droonga properly, you should"
+" prepare request patterns for benchmarking which make the cache hit rate near "
+"the actual rate.\n"
+"So, how do it?"
+msgstr ""
+
+msgid ""
+"You can control the cache hit rate by the number of unique request patterns, c"
+"alculated with the expression:\n"
+"`N = 100 / (cache hit rate)`, because Groonga and Droonga (`droonga-http-serve"
+"r`) cache 100 results at a maximum by default.\n"
+"When the expected cache hit rate is 50%, the number of unique requests is calc"
+"ulated as: `N = 100 / 0.5 = 200`"
+msgstr ""
+
+msgid ""
+"Note: if the actual rate is near zero, the number of unique requests becomes t"
+"oo huge!\n"
+"For such case you should carry up the rate to 0.01 (1%) or something."
+msgstr ""
+
+msgid "### Format of request patterns file"
+msgstr ""
+
+msgid ""
+"The request patterns file for `drnbench-request-response` is plain text: a "
+"list of request paths for the host.\n"
+"Here is a short example of requests for Groonga's `select` command:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"/d/select?command_version=2&table=Pages&limit=10&match_columns=title&output_co"
+"lumns=title&query=AAA\n"
+"/d/select?command_version=2&table=Pages&limit=10&match_columns=title&output_co"
+"lumns=title&query=BBB\n"
+"...\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"If you have any existing service based on Groonga, the list should be generate"
+"d from the actual access log, query log, and so on.\n"
+"Patterns similar to actual requests will measure performance of your system mo"
+"re effectively.\n"
+"To generate 200 unique request patterns, you just have to collect 200 unique p"
+"aths from your log."
+msgstr ""
+
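+msgid ""
+"For example, if there is an access log of a reverse proxy in front of "
+"Groonga, a sketch like the following could work (the log path and the "
+"Combined Log Format, where the request path is the 7th field, are "
+"assumptions - adjust them to your environment):"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"% grep \"GET /d/select\" /var/log/nginx/access.log | \\\n"
+"    awk '{print $7}' | sort -u | head -n 200 \\\n"
+"    > patterns.txt\n"
+"~~~"
+msgstr ""
+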
+msgid ""
+"Otherwise, you'll have to generate a list of request paths from something.\n"
+"See the next section."
+msgstr ""
+
+msgid "### Prepare list of search terms"
+msgstr ""
+
+msgid ""
+"To generate 200 unique request patterns, you have to prepare 200 terms.\n"
+"Moreover, all of the terms must be effective search terms for the Groonga "
+"database.\n"
+"If you use randomly generated terms (like `P2qyNJ9L`, `Hy4pLKc5`, `D5eftuTp`, "
+"...), you won't get an effective benchmark result, because \"not found\" "
+"results will be returned for most requests."
+msgstr ""
+
+msgid ""
+"So there is a utility command `drnbench-extract-searchterms`.\n"
+"It generates a list of terms from Groonga's select result, like:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"% curl \"http://node0:10041/d/select?command_version=2&table=Pages&limit=10&out"
+"put_columns=title\" | \\\n"
+"    drnbench-extract-searchterms\n"
+"title1\n"
+"title2\n"
+"title3\n"
+"...\n"
+"title10\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"`drnbench-extract-searchterms` extracts terms from the first column of records"
+".\n"
+"To collect 200 effective search terms, you just have to give it a select "
+"result with the option `limit=200`."
+msgstr ""
+
+msgid "### Generate request pattern file from given terms"
+msgstr ""
+
+msgid ""
+"OK, let's generate request patterns by `drnbench-extract-searchterms`, from a "
+"select result."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"% n_unique_requests=200\n"
+"% curl \"http://node0:10041/d/select?command_version=2&table=Pages&limit=$n_uni"
+"que_requests&output_columns=title\" | \\\n"
+"    drnbench-extract-searchterms --escape | \\\n"
+"    sed -r -e \"s;^;/d/select?command_version=2\\&table=Pages\\&limit=10\\&match_c"
+"olumns=title,text\\&output_columns=snippet_html(title),snippet_html(text),categ"
+"ories,_key\\&query_flags=NONE\\&sortby=title\\&drilldown=categories\\&drilldown_li"
+"mit=10\\&drilldown_output_columns=_id,_key,_nsubrecs\\&drilldown_sortby=_nsubrec"
+"s\\&query=;\" \\\n"
+"    > ./patterns.txt\n"
+"~~~"
+msgstr ""
+
+msgid "Note:"
+msgstr ""
+
+msgid ""
+" * You must escape `&` in the sed script with a prefixed backslash, like `\&`.\n"
+" * You should specify the `--escape` option for `drnbench-extract-searchterms`.\n"
+"   It escapes characters unsafe for URI strings.\n"
+" * You should specify `query_flags=NONE` as a part of the parameters, if you "
+"use search terms in the `query` parameter.\n"
+"   It forces Groonga to ignore special characters in the `query` parameter.\n"
+"   Otherwise you may see some errors from invalid queries."
+msgstr ""
+
+msgid "The generated file `patterns.txt` will look like the following:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"/d/select?command_version=2&table=Pages&limit=10&match_columns=title,text&outp"
+"ut_columns=snippet_html(title),snippet_html(text),categories,_key&query_flags="
+"NONE&sortby=title&drilldown=categories&drilldown_limit=10&drilldown_output_col"
+"umns=_id,_key,_nsubrecs&drilldown_sortby=_nsubrecs&query=AAA\n"
+"/d/select?command_version=2&table=Pages&limit=10&match_columns=title,text&outp"
+"ut_columns=snippet_html(title),snippet_html(text),categories,_key&query_flags="
+"NONE&sortby=title&drilldown=categories&drilldown_limit=10&drilldown_output_col"
+"umns=_id,_key,_nsubrecs&drilldown_sortby=_nsubrecs&query=BBB\n"
+"...\n"
+"~~~"
+msgstr ""
+
+msgid "## Run the benchmark"
+msgstr ""
+
+msgid ""
+"OK, now everything is ready.\n"
+"Let's benchmark Groonga and Droonga."
+msgstr ""
+
+msgid "### Benchmark Groonga"
+msgstr ""
+
+msgid ""
+"First, run the benchmark against Groonga as the reference.\n"
+"Start Groonga's HTTP server before running, if you configured a node as a "
+"reference Groonga server and the daemon is stopped."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node0)\n"
+"% groonga -p 10041 -d --protocol http $HOME/groonga/db/db\n"
+"~~~"
+msgstr ""
+
+msgid "You can run the benchmark with the command `drnbench-request-response`, like:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node3)\n"
+"% drnbench-request-response \\\n"
+"    --step=2 \\\n"
+"    --start-n-clients=0 \\\n"
+"    --end-n-clients=20 \\\n"
+"    --duration=30 \\\n"
+"    --interval=10 \\\n"
+"    --request-patterns-file=$PWD/patterns.txt \\\n"
+"    --default-hosts=node0 \\\n"
+"    --default-port=10041 \\\n"
+"    --output-path=$PWD/groonga-result.csv\n"
+"~~~"
+msgstr ""
+
+msgid "Important parameters are:"
+msgstr ""
+
+msgid ""
+" * `--step` is the number of virtual clients added at each step.\n"
+" * `--start-n-clients` is the initial number of virtual clients.\n"
+"   Even if you specify `0`, one client is always generated initially.\n"
+" * `--end-n-clients` is the maximum number of virtual clients.\n"
+"   Benchmarks are performed progressively until the number of clients reaches "
+"this limit.\n"
+" * `--duration` is the duration of each benchmark.\n"
+"   This should be long enough to average out the result.\n"
+"   `30` (seconds) seems good for my case.\n"
+" * `--interval` is the interval between benchmarks.\n"
+"   This should be long enough to let the previous benchmark finish.\n"
+"   `10` (seconds) seems good for my case.\n"
+" * `--request-patterns-file` is the path to the pattern file.\n"
+" * `--default-hosts` is the list of host names of the target endpoints.\n"
+"   By specifying multiple hosts as a comma-separated list, you can simulate "
+"load balancing.\n"
+" * `--default-port` is the port number of the target endpoint.\n"
+" * `--output-path` is the path to the result file.\n"
+"   Statistics of all benchmarks are saved to a file at this location."
+msgstr ""
+
+msgid ""
+"While running, you should monitor the system status of `node0`, with `top` or "
+"something similar.\n"
+"If the benchmark draws out Groonga's performance correctly, Groonga's process "
+"uses the CPU fully (for example, `400%` on a computer with 4 processors).\n"
+"Otherwise something is wrong - for example, the network is too narrow, or the "
+"client machine is too slow."
+msgstr ""
+
+msgid "Then you'll get the reference result for Groonga."
+msgstr ""
+
+msgid "To confirm the result is valid, check the response of the `status` command:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"% curl \"http://node0:10041/d/status\" | jq .\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1412326645.19701,\n"
+"    3.76701354980469e-05\n"
+"  ],\n"
+"  {\n"
+"    \"max_command_version\": 2,\n"
+"    \"alloc_count\": 158,\n"
+"    \"starttime\": 1412326485,\n"
+"    \"uptime\": 160,\n"
+"    \"version\": \"4.0.6\",\n"
+"    \"n_queries\": 1000,\n"
+"    \"cache_hit_rate\": 0.49,\n"
+"    \"command_version\": 1,\n"
+"    \"default_command_version\": 1\n"
+"  }\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Look at the value of `\"cache_hit_rate\"`.\n"
+"If it is far from the expected cache hit rate (e.g. `0.5`), something is "
+"wrong - for example, there are too few request patterns.\n"
+"A too-high cache hit rate produces unexpectedly high throughput."
+msgstr ""
+
+msgid ""
+"After that you should stop Groonga to release CPU and RAM resources, if it is "
+"running on a Droonga node."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node0)\n"
+"% pkill groonga\n"
+"~~~"
+msgstr ""
+
+msgid "### Benchmark Droonga"
+msgstr ""
+
+msgid "#### Benchmark Droonga with single node"
+msgstr ""
+
+msgid "Before benchmarking, reconfigure your cluster to contain only one node."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node1, node2)\n"
+"% sudo service droonga-engine stop\n"
+"% sudo service droonga-http-server stop\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node0)\n"
+"% sudo droonga-engine-catalog-generate \\\n"
+"    --hosts=node0\n"
+"% sudo service droonga-engine restart\n"
+"% sudo service droonga-http-server restart\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"To clear any effects from the previous benchmark, you should restart the "
+"services before each test."
+msgstr ""
+
+msgid ""
+"After that the endpoint `node0` works as a Droonga cluster with a single node.\n"
+"Make sure that only one node is actually detected:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node3)\n"
+"% curl \"http://node0:10042/droonga/system/status\" | jq .\n"
+"{\n"
+"  \"nodes\": {\n"
+"    \"node0:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    }\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "Run the benchmark."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node3)\n"
+"% drnbench-request-response \\\n"
+"    --step=2 \\\n"
+"    --start-n-clients=0 \\\n"
+"    --end-n-clients=20 \\\n"
+"    --duration=30 \\\n"
+"    --interval=10 \\\n"
+"    --request-patterns-file=$PWD/patterns.txt \\\n"
+"    --default-hosts=node0 \\\n"
+"    --default-port=10042 \\\n"
+"    --output-path=$PWD/droonga-result-1node.csv\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Note that the default port is changed from `10041` (Groonga's HTTP server) to "
+"`10042` (Droonga).\n"
+"Moreover, the path to the result file also changed."
+msgstr ""
+
+msgid ""
+"While running, you should monitor the system status of `node0`, with `top` or "
+"something similar.\n"
+"It may help you to find the bottleneck."
+msgstr ""
+
+msgid ""
+"And, to confirm the result is valid, you should check the actual cache hit rat"
+"e:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"% curl \"http://node0:10042/statistics/cache\" | jq .\n"
+"{\n"
+"  \"hitRatio\": 49.830717830807124,\n"
+"  \"nHits\": 66968,\n"
+"  \"nGets\": 134391\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Look at the value of `\"hitRatio\"`.\n"
+"The actual cache hit rate of the HTTP server is reported as a percentage like "
+"above (the value `49.830717830807124` means `49.830717830807124%`).\n"
+"If it is far from the expected cache hit rate, something is wrong."
+msgstr ""
+
+msgid "#### Benchmark Droonga with two nodes"
+msgstr ""
+
+msgid "Before benchmarking, join the second node to the cluster."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node0, node1)\n"
+"% sudo droonga-engine-catalog-generate \\\n"
+"    --hosts=node0,node1\n"
+"% sudo service droonga-engine restart\n"
+"% sudo service droonga-http-server restart\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"After that both endpoints `node0` and `node1` work as a Droonga cluster with t"
+"wo nodes.\n"
+"Make sure that two nodes are actually detected:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node3)\n"
+"% curl \"http://node0:10042/droonga/system/status\" | jq .\n"
+"{\n"
+"  \"nodes\": {\n"
+"    \"node0:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    },\n"
+"    \"node1:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    }\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node3)\n"
+"% drnbench-request-response \\\n"
+"    --step=2 \\\n"
+"    --start-n-clients=0 \\\n"
+"    --end-n-clients=20 \\\n"
+"    --duration=30 \\\n"
+"    --interval=10 \\\n"
+"    --request-patterns-file=$PWD/patterns.txt \\\n"
+"    --default-hosts=node0,node1 \\\n"
+"    --default-port=10042 \\\n"
+"    --output-path=$PWD/droonga-result-2nodes.csv\n"
+"~~~"
+msgstr ""
+
+msgid "Note that two hosts are specified via the `--default-hosts` option."
+msgstr ""
+
+msgid ""
+"If you send all requests to a single endpoint, `droonga-http-server` will "
+"become a bottleneck, because it works as a single process for now.\n"
+"Moreover, `droonga-http-server` and `droonga-engine` will compete for CPU "
+"resources.\n"
+"To measure the performance of your Droonga cluster effectively, you should "
+"spread the CPU load evenly across the nodes."
+msgstr ""
+
+msgid ""
+"Of course, in a production environment this should be done by a load "
+"balancer, but it's a hassle to set up a load balancer just for benchmarking.\n"
+"Instead, you can specify multiple endpoint host names as a comma-separated "
+"list for the `--default-hosts` option."
+msgstr ""
+
+msgid "And, the path to the result file also changed."
+msgstr ""
+
+msgid ""
+"Don't forget to monitor the system status of both nodes while benchmarking.\n"
+"If only one node is busy and the other is idle, something is wrong - for "
+"example, they are not working as a cluster.\n"
+"You also must check the actual cache hit rate of all nodes."
+msgstr ""
+
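+msgid ""
+"For example, with two nodes you can check both caches like the following "
+"(reusing the `/statistics/cache` request shown above):"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node3)\n"
+"% curl \"http://node0:10042/statistics/cache\" | jq .\n"
+"% curl \"http://node1:10042/statistics/cache\" | jq .\n"
+"~~~"
+msgstr ""
+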
+msgid "#### Benchmark Droonga with three nodes"
+msgstr ""
+
+msgid "Before benchmarking, join the last node to the cluster."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node0, node1, node2)\n"
+"% sudo droonga-engine-catalog-generate \\\n"
+"    --hosts=node0,node1,node2\n"
+"% sudo service droonga-engine restart\n"
+"% sudo service droonga-http-server restart\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"After that all endpoints `node0`, `node1`, and `node2` work as a Droonga clust"
+"er with three nodes.\n"
+"Make sure that three nodes are actually detected:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node3)\n"
+"% curl \"http://node0:10042/droonga/system/status\" | jq .\n"
+"{\n"
+"  \"nodes\": {\n"
+"    \"node0:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    },\n"
+"    \"node1:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    },\n"
+"    \"node2:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    }\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node3)\n"
+"% drnbench-request-response \\\n"
+"    --step=2 \\\n"
+"    --start-n-clients=0 \\\n"
+"    --end-n-clients=20 \\\n"
+"    --duration=30 \\\n"
+"    --interval=10 \\\n"
+"    --request-patterns-file=$PWD/patterns.txt \\\n"
+"    --default-hosts=node0,node1,node2 \\\n"
+"    --default-port=10042 \\\n"
+"    --output-path=$PWD/droonga-result-3nodes.csv\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Note that both `--default-hosts` and `--output-path` are changed again.\n"
+"Monitoring the system status and checking the cache hit rate of all nodes "
+"are also important."
+msgstr ""
+
+msgid "## Analyze the result"
+msgstr ""
+
+msgid "OK, now you have four results:"
+msgstr ""
+
+msgid ""
+" * `groonga-result.csv`\n"
+" * `droonga-result-1node.csv`\n"
+" * `droonga-result-2nodes.csv`\n"
+" * `droonga-result-3nodes.csv`"
+msgstr ""
+
+msgid "[As described](#how-to-analyze), you can analyze them."
+msgstr ""
+
+msgid "For example, you can plot a graph from these results like:"
+msgstr ""
+
+msgid ""
+"![A layered graph of latency](/images/tutorial/benchmark/latency-mixed-1.0.8.p"
+"ng)"
+msgstr ""
+
+msgid "You can explain this graph of latency as:"
+msgstr ""
+
+msgid ""
+" * The minimum latency of Droonga is larger than Groonga's.\n"
+"   There is some overhead in Droonga.\n"
+" * The latency of multi-node Droonga increases more slowly than Groonga's.\n"
+"   Droonga can process more requests in the same time without extra waiting "
+"time."
+msgstr ""
+
+msgid ""
+"![A layered graph of throughput](/images/tutorial/benchmark/throughput-mixed-1"
+".0.8.png)"
+msgstr ""
+
+msgid "You can explain this graph of throughput as:"
+msgstr ""
+
+msgid ""
+" * The graphs of Groonga and single-node Droonga are alike.\n"
+"   There is little performance loss between Groonga and Droonga.\n"
+" * The maximum throughput of Droonga increases with the number of nodes."
+msgstr ""
+
+msgid ""
+"(Note: performance results fluctuate due to various factors.\n"
+"This graph is just an example from a specific version in a specific "
+"environment.)"
+msgstr ""
+
+msgid "## Conclusion"
+msgstr ""
+
+msgid ""
+"In this tutorial, you prepared a reference [Groonga][] server and a "
+"[Droonga][] cluster.\n"
+"You also studied how to prepare request patterns, how to measure your "
+"systems, and how to analyze the results."
+msgstr ""
+
+msgid ""
+"  [Ubuntu]: http://www.ubuntu.com/\n"
+"  [CentOS]: https://www.centos.org/\n"
+"  [Droonga]: https://droonga.org/\n"
+"  [Groonga]: http://groonga.org/\n"
+"  [drnbench]: https://github.com/droonga/drnbench/\n"
+"  [wikipedia-search]: https://github.com/droonga/wikipedia-search/\n"
+"  [command reference]: ../../reference/commands/"
+msgstr ""

  Added: _po/ja/tutorial/1.1.0/dump-restore/index.po (+747 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/tutorial/1.1.0/dump-restore/index.po    2014-11-30 23:20:40 +0900 (fdf6302)
@@ -0,0 +1,747 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: \"Droonga tutorial: How to backup and restore the database?\"\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## The goal of this tutorial"
+msgstr ""
+
+msgid "Learn the steps to back up and restore data by hand."
+msgstr ""
+
+msgid "## Precondition"
+msgstr ""
+
+msgid ""
+"* You must have an existing [Droonga][] cluster with some data.\n"
+"  Please complete the [\"getting started\" tutorial](../groonga/) before this."
+msgstr ""
+
+msgid ""
+"This tutorial assumes that there are two existing Droonga nodes prepared by th"
+"e [previous tutorial](../groonga/): `node0` (`192.168.100.50`) and `node1` (`1"
+"92.168.100.51`), and there is another computer `node2` (`192.168.100.52`) as a"
+" working environment.\n"
+"If you have Droonga nodes with other names, read `node0`, `node1` and `node2` "
+"in the following descriptions as yours."
+msgstr ""
+
+msgid "## Backup data in a Droonga cluster"
+msgstr ""
+
+msgid "### Install `drndump`"
+msgstr ""
+
+msgid ""
+"First, install a command line tool named `drndump` via RubyGems, on the "
+"working machine `node2`:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# gem install drndump\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"After that, confirm that the `drndump` command has been installed "
+"successfully:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ drndump --version\n"
+"drndump 1.0.0\n"
+"~~~"
+msgstr ""
+
+msgid "### Dump all data in a Droonga cluster"
+msgstr ""
+
+msgid ""
+"The `drndump` command extracts all schema and data as JSONs.\n"
+"Let's dump the contents of your existing Droonga cluster."
+msgstr ""
+
+msgid ""
+"For example, if your cluster is constructed from two nodes `node0` "
+"(`192.168.100.50`) and `node1` (`192.168.100.51`), and you are now logged in "
+"to another new computer `node2` (`192.168.100.52`), then the command line is:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# drndump --host=node0 \\\n"
+"           --receiver-host=node2\n"
+"{\n"
+"  \"type\": \"table_create\",\n"
+"  \"dataset\": \"Default\",\n"
+"  \"body\": {\n"
+"    \"name\": \"Location\",\n"
+"    \"flags\": \"TABLE_PAT_KEY\",\n"
+"    \"key_type\": \"WGS84GeoPoint\"\n"
+"  }\n"
+"}\n"
+"...\n"
+"{\n"
+"  \"dataset\": \"Default\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Store\",\n"
+"    \"key\": \"store9\",\n"
+"    \"values\": {\n"
+"      \"location\": \"146702531x-266363233\",\n"
+"      \"name\": \"Macy's 6th Floor - Herald Square - New York NY  (W)\"\n"
+"    }\n"
+"  },\n"
+"  \"type\": \"add\"\n"
+"}\n"
+"{\n"
+"  \"type\": \"column_create\",\n"
+"  \"dataset\": \"Default\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Location\",\n"
+"    \"name\": \"store\",\n"
+"    \"type\": \"Store\",\n"
+"    \"flags\": \"COLUMN_INDEX\",\n"
+"    \"source\": \"location\"\n"
+"  }\n"
+"}\n"
+"{\n"
+"  \"type\": \"column_create\",\n"
+"  \"dataset\": \"Default\",\n"
+"  \"body\": {\n"
+"    \"table\": \"Term\",\n"
+"    \"name\": \"store_name\",\n"
+"    \"type\": \"Store\",\n"
+"    \"flags\": \"COLUMN_INDEX|WITH_POSITION\",\n"
+"    \"source\": \"name\"\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "Note the following:"
+msgstr ""
+
+msgid ""
+" * You must specify a valid host name of one of the nodes in the cluster, via "
+"the option `--host`.\n"
+" * You must specify a valid host name or IP address of the computer you are "
+"logged in to, via the option `--receiver-host`.\n"
+"   It is used by the Droonga cluster to send response messages.\n"
+" * The result includes the complete set of commands to construct a dataset "
+"identical to the source."
+msgstr ""
+
+msgid ""
+"The result is printed to the standard output.\n"
+"To save it as a JSONs file, you'll use a redirection like:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ drndump --host=node0 \\\n"
+"          --receiver-host=node2 \\\n"
+"    > dump.jsons\n"
+"~~~"
+msgstr ""
+
+msgid "## Restore data to a Droonga cluster"
+msgstr ""
+
+msgid "### Install `droonga-client`"
+msgstr ""
+
+msgid "The result of the `drndump` command is a list of Droonga messages."
+msgstr ""
+
+msgid ""
+"You need to use the `droonga-send` command to send it to your Droonga cluster.\n"
+"Install the command, included in the package `droonga-client`, via RubyGems, "
+"on the working machine `node2`:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# gem install droonga-client\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"After that, confirm that the `droonga-send` command has been installed "
+"successfully:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ droonga-send --version\n"
+"droonga-send 0.2.0\n"
+"~~~"
+msgstr ""
+
+msgid "### Prepare an empty Droonga cluster"
+msgstr ""
+
+msgid ""
+"Assume that there is an empty Droonga cluster constructed from two nodes `node"
+"0` (`192.168.100.50`) and `node1` (`192.168.100.51`), now you are logged in to"
+" the host `node2` (`192.168.100.52`), and there is a dump file `dump.jsons`."
+msgstr ""
+
+msgid ""
+"If you are reading this tutorial sequentially, you'll have an existing cluster"
+" and the dump file.\n"
+"Make it empty with these commands:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ endpoint=\"http://node0:10041\"\n"
+"$ curl \"$endpoint/d/table_remove?name=Location\" | jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1406610703.2229023,\n"
+"    0.0010793209075927734\n"
+"  ],\n"
+"  true\n"
+"]\n"
+"$ curl \"$endpoint/d/table_remove?name=Store\" | jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1406610708.2757723,\n"
+"    0.006396293640136719\n"
+"  ],\n"
+"  true\n"
+"]\n"
+"$ curl \"$endpoint/d/table_remove?name=Term\" | jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1406610712.379644,\n"
+"    6.723403930664062e-05\n"
+"  ],\n"
+"  true\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"After that the cluster becomes empty.\n"
+"Let's confirm it.\n"
+"You'll see empty results from the `select` and `table_list` commands, like:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ curl \"$endpoint/d/table_list\" | jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1406610804.1535122,\n"
+"    0.0002875328063964844\n"
+"  ],\n"
+"  [\n"
+"    [\n"
+"      [\n"
+"        \"id\",\n"
+"        \"UInt32\"\n"
+"      ],\n"
+"      [\n"
+"        \"name\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"path\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"flags\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"domain\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"range\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"default_tokenizer\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"normalizer\",\n"
+"        \"ShortText\"\n"
+"      ]\n"
+"    ]\n"
+"  ]\n"
+"]\n"
+"$ curl -X DELETE \"$endpoint/cache\" | jq \".\"\n"
+"true\n"
+"$ curl \"$endpoint/d/select?table=Store&output_columns=name&limit=10\" | jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1401363465.610241,\n"
+"    0\n"
+"  ],\n"
+"  [\n"
+"    [\n"
+"      [\n"
+"        null\n"
+"      ],\n"
+"      []\n"
+"    ]\n"
+"  ]\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Note: clear the response cache before sending a request for the `select` "
+"command.\n"
+"Otherwise you'll see an unexpected cached result based on the old "
+"configuration."
+msgstr ""
+
+msgid ""
+"Response caches are kept for the most recent 100 requests, and their lifetime "
+"is 1 minute, by default.\n"
+"You can clear all response caches manually by sending an HTTP `DELETE` request"
+" to the path `/cache`, like above."
+msgstr ""
+
+msgid "### Restore data from a dump result, to an empty Droonga cluster"
+msgstr ""
+
+msgid ""
+"Because the result of the `drndump` command includes the complete information "
+"to construct a dataset identical to the source, you can re-construct your "
+"cluster from a dump file, even if the cluster is broken.\n"
+"You just have to pour the contents of the dump file into an empty cluster, "
+"with the `droonga-send` command."
+msgstr ""
+
+msgid "To restore the cluster from the dump file, run a command line like:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ droonga-send --server=node0  \\\n"
+"                    dump.jsons\n"
+"~~~"
+msgstr ""
+
+msgid "Note:"
+msgstr ""
+
+msgid ""
+" * You must specify a valid host name or IP address of one of the nodes in "
+"the cluster, via the option `--server`."
+msgstr ""
+
+msgid "Then the data is completely restored. Confirm it:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ curl -X DELETE \"$endpoint/cache\" | jq \".\"\n"
+"true\n"
+"$ curl \"$endpoint/d/select?table=Store&output_columns=name&limit=10\" | jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1401363556.0294158,\n"
+"    7.62939453125e-05\n"
+"  ],\n"
+"  [\n"
+"    [\n"
+"      [\n"
+"        40\n"
+"      ],\n"
+"      [\n"
+"        [\n"
+"          \"name\",\n"
+"          \"ShortText\"\n"
+"        ]\n"
+"      ],\n"
+"      [\n"
+"        \"1st Avenue & 75th St. - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"76th & Second - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"Herald Square- Macy's - New York NY\"\n"
+"      ],\n"
+"      [\n"
+"        \"Macy's 5th Floor - Herald Square - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"80th & York - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"Columbus @ 67th - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"45th & Broadway - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"Marriott Marquis - Lobby - New York NY\"\n"
+"      ],\n"
+"      [\n"
+"        \"Second @ 81st - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"52nd & Seventh - New York NY  (W)\"\n"
+"      ]\n"
+"    ]\n"
+"  ]\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid "## Duplicate an existing Droonga cluster to another empty cluster directly"
+msgstr ""
+
+msgid ""
+"If you have multiple Droonga clusters, then you can duplicate one to another.\n"
+"For this purpose, the package `droonga-engine` includes a utility command `dro"
+"onga-engine-absorb-data`.\n"
+"It copies all data from an existing cluster to another one directly, so it is "
+"recommended if you don't need to save a dump file locally."
+msgstr ""
+
+msgid "### Prepare multiple Droonga clusters"
+msgstr ""
+
+msgid ""
+"Assume that there are two clusters: the source has a node `node0` "
+"(`192.168.100.50`), and the destination has a node `node1` (`192.168.100.51`)."
+msgstr ""
+
+msgid ""
+"If you are reading this tutorial sequentially, you'll have an existing cluster"
+" with two nodes.\n"
+"Split it into two clusters with `droonga-engine-catalog-modify` and make one "
+"of them empty, with these commands:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node0)\n"
+"# droonga-engine-catalog-modify --replica-hosts=node0\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node1)\n"
+"# droonga-engine-catalog-modify --replica-hosts=node1\n"
+"$ endpoint=\"http://node1:10041\"\n"
+"$ curl \"$endpoint/d/table_remove?name=Location\"\n"
+"$ curl \"$endpoint/d/table_remove?name=Store\"\n"
+"$ curl \"$endpoint/d/table_remove?name=Term\"\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"After that there are two clusters: one contains `node0` with data, the other "
+"contains `node1` with no data. Confirm it:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ curl \"http://node0:10041/droonga/system/status\" | jq \".\"\n"
+"{\n"
+"  \"nodes\": {\n"
+"    \"node0:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    }\n"
+"  }\n"
+"}\n"
+"$ curl -X DELETE \"http://node0:10041/cache\" | jq \".\"\n"
+"true\n"
+"$ curl \"http://node0:10041/d/select?table=Store&output_columns=name&limit=10\" "
+"| jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1401363556.0294158,\n"
+"    7.62939453125e-05\n"
+"  ],\n"
+"  [\n"
+"    [\n"
+"      [\n"
+"        40\n"
+"      ],\n"
+"      [\n"
+"        [\n"
+"          \"name\",\n"
+"          \"ShortText\"\n"
+"        ]\n"
+"      ],\n"
+"      [\n"
+"        \"1st Avenue & 75th St. - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"76th & Second - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"Herald Square- Macy's - New York NY\"\n"
+"      ],\n"
+"      [\n"
+"        \"Macy's 5th Floor - Herald Square - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"80th & York - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"Columbus @ 67th - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"45th & Broadway - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"Marriott Marquis - Lobby - New York NY\"\n"
+"      ],\n"
+"      [\n"
+"        \"Second @ 81st - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"52nd & Seventh - New York NY  (W)\"\n"
+"      ]\n"
+"    ]\n"
+"  ]\n"
+"]\n"
+"$ curl \"http://node1:10041/droonga/system/status\" | jq \".\"\n"
+"{\n"
+"  \"nodes\": {\n"
+"    \"node1:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    }\n"
+"  }\n"
+"}\n"
+"$ curl -X DELETE \"http://node1:10041/cache\" | jq \".\"\n"
+"true\n"
+"$ curl \"http://node1:10041/d/select?table=Store&output_columns=name&limit=10\" "
+"| jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1401363465.610241,\n"
+"    0\n"
+"  ],\n"
+"  [\n"
+"    [\n"
+"      [\n"
+"        null\n"
+"      ],\n"
+"      []\n"
+"    ]\n"
+"  ]\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Note: `droonga-http-server` is associated with the `droonga-engine` working "
+"on the same computer.\n"
+"After you split the cluster like above, `droonga-http-server` on `node0` "
+"communicates only with `droonga-engine` on `node0`, and `droonga-http-server` "
+"on `node1` communicates only with `droonga-engine` on `node1`.\n"
+"See also the next tutorial for more details."
+msgstr ""
+
+msgid "### Duplicate data between two Droonga clusters"
+msgstr ""
+
+msgid ""
+"To copy data between two clusters, run the `droonga-engine-absorb-data` comman"
+"d on a node, like:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node1)\n"
+"$ droonga-engine-absorb-data --source-host=node0 \\\n"
+"                             --destination-host=node1 \\\n"
+"                             --receiver-host=node1\n"
+"Start to absorb data from node0\n"
+"                       to node1\n"
+"                      via node1 (this host)\n"
+"  dataset = Default\n"
+"  port    = 10031\n"
+"  tag     = droonga"
+msgstr ""
+
+msgid ""
+"Absorbing...\n"
+"...\n"
+"Done.\n"
+"~~~"
+msgstr ""
+
+msgid "You can also run the command on a different node, like:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node2)\n"
+"$ droonga-engine-absorb-data --source-host=node0 \\\n"
+"                             --destination-host=node1 \\\n"
+"                             --receiver-host=node2\n"
+"Start to absorb data from node0\n"
+"                       to node1\n"
+"                      via node2 (this host)\n"
+"...\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Note that you must specify the host name (or the IP address) of the working ma"
+"chine via the `--receiver-host` option."
+msgstr ""
+
+msgid ""
+"After that, the contents of these two clusters are completely synchronized. "
+"Confirm it:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ curl -X DELETE \"http://node1:10041/cache\" | jq \".\"\n"
+"true\n"
+"$ curl \"http://node1:10041/d/select?table=Store&output_columns=name&limit=10\" "
+"| jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1401363556.0294158,\n"
+"    7.62939453125e-05\n"
+"  ],\n"
+"  [\n"
+"    [\n"
+"      [\n"
+"        40\n"
+"      ],\n"
+"      [\n"
+"        [\n"
+"          \"name\",\n"
+"          \"ShortText\"\n"
+"        ]\n"
+"      ],\n"
+"      [\n"
+"        \"1st Avenue & 75th St. - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"76th & Second - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"Herald Square- Macy's - New York NY\"\n"
+"      ],\n"
+"      [\n"
+"        \"Macy's 5th Floor - Herald Square - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"80th & York - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"Columbus @ 67th - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"45th & Broadway - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"Marriott Marquis - Lobby - New York NY\"\n"
+"      ],\n"
+"      [\n"
+"        \"Second @ 81st - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"52nd & Seventh - New York NY  (W)\"\n"
+"      ]\n"
+"    ]\n"
+"  ]\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid "### Unite two Droonga clusters"
+msgstr ""
+
+msgid "Run the following command lines to unite these two clusters:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node0)\n"
+"# droonga-engine-catalog-modify --add-replica-hosts=node1\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node1)\n"
+"# droonga-engine-catalog-modify --add-replica-hosts=node0\n"
+"~~~"
+msgstr ""
+
+msgid "After that there is just one cluster - yes, it's the initial state."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ curl \"http://node0:10041/droonga/system/status\" | jq \".\"\n"
+"{\n"
+"  \"nodes\": {\n"
+"    \"node0:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    },\n"
+"    \"node1:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    }\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "## Conclusion"
+msgstr ""
+
+msgid ""
+"In this tutorial, you backed up a [Droonga][] cluster and restored the data.\n"
+"Moreover, you duplicated the contents of an existing Droonga cluster to "
+"another empty cluster."
+msgstr ""
+
+msgid ""
+"Next, let's learn [how to add a new replica to an existing Droonga cluster](.."
+"/add-replica/)."
+msgstr ""
+
+msgid ""
+"  [Ubuntu]: http://www.ubuntu.com/\n"
+"  [Droonga]: https://droonga.org/\n"
+"  [Groonga]: http://groonga.org/\n"
+"  [command reference]: ../../reference/commands/"
+msgstr ""

  Added: _po/ja/tutorial/1.1.0/groonga/index.po (+1225 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/tutorial/1.1.0/groonga/index.po    2014-11-30 23:20:40 +0900 (19da433)
@@ -0,0 +1,1225 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: \"Droonga tutorial: Getting started/How to migrate from Groonga?\"\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## The goal of this tutorial"
+msgstr ""
+
+msgid ""
+"Learn the steps to run a Droonga cluster by hand, and to use it as a "
+"[Groonga][groonga] compatible server."
+msgstr ""
+
+msgid "## Precondition"
+msgstr ""
+
+msgid ""
+"* You must have basic knowledge and experience of setting up and operating an "
+"[Ubuntu][] or [CentOS][] server.\n"
+"* You must have basic knowledge and experience of using [Groonga][groonga] "
+"via HTTP."
+msgstr ""
+
+msgid "## What's Droonga?"
+msgstr ""
+
+msgid ""
+"It is a data processing engine based on a distributed architecture, named "
+"after the term \"distributed-Groonga\".\n"
+"As its name suggests, it can work as a Groonga compatible server with some "
+"improvements - replication and sharding."
+msgstr ""
+
+msgid ""
+"In a certain sense, Droonga is quite different from Groonga in its "
+"architecture, design, API, etc.\n"
+"However, you don't have to understand the whole architecture of Droonga if "
+"you simply use it as a Groonga compatible server."
+msgstr ""
+
+msgid ""
+"For example, let's try to build a database system to find [Starbucks stores in"
+" New York](http://geocommons.com/overlays/430038)."
+msgstr ""
+
+msgid "## Set up a Droonga cluster"
+msgstr ""
+
+msgid ""
+"A database system based on Droonga is called a *Droonga cluster*.\n"
+"This section describes how to set up a Droonga cluster from scratch."
+msgstr ""
+
+msgid "### Prepare computers for Droonga nodes"
+msgstr ""
+
+msgid ""
+"A Droonga cluster is constructed from one or more computers, called *Droonga n"
+"ode*(s).\n"
+"First, prepare computers for the Droonga nodes."
+msgstr ""
+
+msgid ""
+"This tutorial describes the steps to set up a Droonga cluster based on "
+"existing computers.\n"
+"The following instructions are basically written for a successfully prepared "
+"virtual machine with `Ubuntu 14.04 x64` or `CentOS 7 x64` on the service "
+"[DigitalOcean](https://www.digitalocean.com/), with an available console."
+msgstr ""
+
+msgid ""
+"If you just want to try Droonga casually, see another tutorial: [how to "
+"prepare multiple virtual machines on your own "
+"computer](../virtual-machines-for-experiments/)."
+msgstr ""
+
+msgid "NOTE:"
+msgstr ""
+
+msgid ""
+" * Make sure to use instances with >= 2GB of memory, at least during the "
+"installation of the packages required for Droonga.\n"
+"   Otherwise, you may experience strange build errors.\n"
+" * Make sure the hostname reported by `hostname -f` or the IP address "
+"reported by `hostname -i` is accessible from every other computer in your "
+"cluster.\n"
+" * Make sure that the commands `curl` and `jq` are installed on your "
+"computers.\n"
+"   `curl` is required to download the installation scripts.\n"
+"   `jq` is not required for installation, but it will help you to read the "
+"response JSONs returned from Droonga."
+msgstr ""
+
+msgid ""
+"You need to prepare two or more nodes for effective replication.\n"
+"So this tutorial assumes that you have two computers:"
+msgstr ""
+
+msgid ""
+" * has an IP address `192.168.100.50`, with a host name `node0`.\n"
+" * has an IP address `192.168.100.51`, with a host name `node1`."
+msgstr ""
+
+msgid "### Set up computers as Droonga nodes"
+msgstr ""
+
+msgid ""
+"Groonga provides binary packages, so you can install Groonga easily on some "
+"environments.\n"
+"(See: [how to install Groonga](http://groonga.org/docs/install.html))"
+msgstr ""
+
+msgid "On the other hand, the steps to set up a computer as a Droonga node are:"
+msgstr ""
+
+msgid ""
+" 1. Install the `droonga-engine`.\n"
+" 2. Install the `droonga-http-server`.\n"
+" 3. Configure the node to work together with other nodes."
+msgstr ""
+
+msgid ""
+"Note that you must do all steps on each computer.\n"
+"However, they're very simple."
+msgstr ""
+
+msgid ""
+"Let's log in to the computer `node0` (`192.168.100.50`), and install Droonga c"
+"omponents."
+msgstr ""
+
+msgid ""
+"First, install the `droonga-engine`.\n"
+"It is the core component that provides most features of the Droonga system.\n"
+"Download the installation script and run it with `bash` as the root user:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# curl https://raw.githubusercontent.com/droonga/droonga-engine/master/install"
+".sh | \\\n"
+"    bash\n"
+"...\n"
+"Installing droonga-engine from RubyGems...\n"
+"...\n"
+"Preparing the user...\n"
+"...\n"
+"Setting up the configuration directory...\n"
+"This node is configured with a hostname XXXXXXXX."
+msgstr ""
+
+msgid ""
+"Registering droonga-engine as a service...\n"
+"...\n"
+"Successfully installed droonga-engine.\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Note: the name of the node itself (guessed from the host name of the "
+"computer) appears in the message.\n"
+"*It is used in various situations*, so *don't forget the name of each node*."
+msgstr ""
+
+msgid ""
+"Second, install the `droonga-http-server`.\n"
+"It is the frontend component required to translate HTTP requests into "
+"Droonga's native messages.\n"
+"Download the installation script and run it with `bash` as the root user:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# curl https://raw.githubusercontent.com/droonga/droonga-http-server/master/in"
+"stall.sh | \\\n"
+"    bash\n"
+"...\n"
+"Installing droonga-http-server from npmjs.org...\n"
+"...\n"
+"Preparing the user...\n"
+"...\n"
+"Setting up the configuration directory...\n"
+"The droonga-engine service is detected on this node.\n"
+"The droonga-http-server is configured to be connected\n"
+"to this node (XXXXXXXX).\n"
+"This node is configured with a hostname XXXXXXXX."
+msgstr ""
+
+msgid ""
+"Registering droonga-http-server as a service...\n"
+"...\n"
+"Successfully installed droonga-http-server.\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"After that, do the same operations on the other computer `node1` "
+"(`192.168.100.51`) also.\n"
+"Then the two computers are successfully prepared to work as Droonga nodes."
+msgstr ""
+
+msgid ""
+"### When your computers don't have a host name accessible from other computers"
+"... {#accessible-host-name}"
+msgstr ""
+
+msgid ""
+"Each Droonga node must know the *accessible host name* of the node itself, to "
+"communicate with other nodes."
+msgstr ""
+
+msgid ""
+"The installation script guesses the accessible host name of the node "
+"automatically.\n"
+"You can confirm what value was detected as the host name of the node itself, "
+"with the following command:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# cat ~droonga-engine/droonga/droonga-engine.yaml | grep host\n"
+"host: XXXXXXXX\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"However, it may be misdetected if the computer is not configured properly.\n"
+"For example, even if a node is configured with the host name `node0`, it "
+"cannot receive any messages from other nodes when the others cannot resolve "
+"the name `node0` to the actual IP address."
+msgstr ""
+
+msgid ""
+"Then you have to reconfigure your node with the raw IP address of the node "
+"itself, like:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node0=192.168.100.50)\n"
+"# host=192.168.100.50\n"
+"# droonga-engine-configure --quiet --reset-config --reset-catalog \\\n"
+"                           --host=$host\n"
+"# droonga-http-server-configure --quiet --reset-config \\\n"
+"                                --droonga-engine-host-name=$host \\\n"
+"                                --receive-host-name=$host"
+msgstr ""
+
+msgid ""
+"(on node1=192.168.100.51)\n"
+"# host=192.168.100.51\n"
+"...\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Then your computer `node0` is configured as a Droonga node with the host name "
+"`192.168.100.50`, and `node1` becomes a node with the name `192.168.100.51`.\n"
+"As said before, *the configured name is used in various situations*, so "
+"*don't forget the name of each node*."
+msgstr ""
+
+msgid ""
+"This tutorial assumes that all your computers can resolve each other's host "
+"names `node0` and `node1` correctly.\n"
+"Otherwise, read the host names `node0` and `node1` in the following "
+"instructions as raw IP addresses like `192.168.100.50` and `192.168.100.51`."
+msgstr ""
+
+msgid ""
+"By the way, you can specify your preferred value as the host name of the "
+"computer itself via environment variables for the installation script, like:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(on node0=192.168.100.50)\n"
+"# host=192.168.100.50\n"
+"# curl https://raw.githubusercontent.com/droonga/droonga-engine/master/install"
+".sh | \\\n"
+"    HOST=$host bash\n"
+"# curl https://raw.githubusercontent.com/droonga/droonga-http-server/master/in"
+"stall.sh | \\\n"
+"    ENGINE_HOST=$host HOST=$host bash"
+msgstr ""
+
+msgid ""
+"This option will help you if you already know that your computers are not "
+"configured to resolve each other's names."
+msgstr ""
+
+msgid "### Configure nodes to work together as a cluster"
+msgstr ""
+
+msgid ""
+"Currently, these nodes are still individual nodes.\n"
+"Let's configure them to work together as a cluster."
+msgstr ""
+
+msgid "Run commands like this, on each node:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# droonga-engine-catalog-generate --hosts=node0,node1\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Of course you must specify the correct host names of the nodes via the "
+"`--hosts` option.\n"
+"If your nodes are configured with raw IP addresses, the command line is:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# droonga-engine-catalog-generate --hosts=192.168.100.50,192.168.100.51\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"OK, now your Droonga cluster is correctly prepared.\n"
+"Two nodes are configured to work together as a Droonga cluster."
+msgstr ""
+
+msgid "Let's continue to [the next step, \"how to use the cluster\"](#use)."
+msgstr ""
+
+msgid "## Use the Droonga cluster, via HTTP {#use}"
+msgstr ""
+
+msgid "### Start and stop services on each Droonga node"
+msgstr ""
+
+msgid "You can run Groonga as an HTTP server daemon with the option `-d`, like:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# groonga -p 10041 -d --protocol http /tmp/databases/db\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"On the other hand, you have to run multiple server daemons on each Droonga "
+"node to use your Droonga cluster via HTTP."
+msgstr ""
+
+msgid ""
+"If you set up your Droonga nodes with the installation scripts, the daemons "
+"have already been configured as system services managed via the `service` "
+"command.\n"
+"To start them, run commands like the following on each Droonga node:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# service droonga-engine start\n"
+"# service droonga-http-server start\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"By these commands, the services start to work.\n"
+"Now the two nodes construct a cluster and they monitor each other.\n"
+"If one of the nodes dies and any other node is still alive, the survivor(s) "
+"will keep working as the Droonga cluster.\n"
+"Then you can recover the dead node and re-join it to the cluster silently."
+msgstr ""
+
+msgid ""
+"Let's make sure that the cluster works, by a Droonga command, `system.status`."
+"\n"
+"You can see the result via HTTP, like:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ curl \"http://node0:10041/droonga/system/status\" | jq \".\"\n"
+"{\n"
+"  \"nodes\": {\n"
+"    \"node0:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    },\n"
+"    \"node1:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    }\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"The result says that two nodes are working correctly.\n"
+"Because it is a cluster, the other endpoint returns the same result."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ curl \"http://node1:10041/droonga/system/status\" | jq \".\"\n"
+"{\n"
+"  \"nodes\": {\n"
+"    \"node0:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    },\n"
+"    \"node1:10031/droonga\": {\n"
+"      \"live\": true\n"
+"    }\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"`droonga-http-server` connects to all `droonga-engine`s in the cluster, and "
+"distributes requests to them like a load balancer.\n"
+"Moreover, even if some `droonga-engine`s stop, `droonga-http-server` excludes "
+"those dead engines automatically, and the cluster keeps working correctly."
+msgstr ""
+
+msgid "To stop the services, run commands like the following on each Droonga node:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# service droonga-engine stop\n"
+"# service droonga-http-server stop\n"
+"~~~"
+msgstr ""
+
+msgid "After verification, start services again, on each Droonga node."
+msgstr ""
+
+msgid "### Create a table, columns, and indexes"
+msgstr ""
+
+msgid ""
+"Now your Droonga cluster actually works as an HTTP server compatible with "
+"Groonga's HTTP server."
+msgstr ""
+
+msgid ""
+"Requests are exactly the same as those for a Groonga server.\n"
+"To create a new table `Store`, you just have to send a GET request for the "
+"`table_create` command, like:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ endpoint=\"http://node0:10041\"\n"
+"$ curl \"$endpoint/d/table_create?name=Store&flags=TABLE_PAT_KEY&key_type=Short"
+"Text\" | jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1401358896.360356,\n"
+"    0.0035653114318847656\n"
+"  ],\n"
+"  true\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Note that you have to specify, as the host, one of the Droonga nodes with an "
+"active droonga-http-server in your Droonga cluster.\n"
+"In other words, you can use any node you like in the cluster as the endpoint.\n"
+"All requests will be distributed to suitable nodes in the cluster."
+msgstr ""
+
+msgid ""
+"OK, now the table has been created successfully.\n"
+"Let's see it by the `table_list` command:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ curl \"$endpoint/d/table_list\" | jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1401358908.9126804,\n"
+"    0.001600027084350586\n"
+"  ],\n"
+"  [\n"
+"    [\n"
+"      [\n"
+"        \"id\",\n"
+"        \"UInt32\"\n"
+"      ],\n"
+"      [\n"
+"        \"name\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"path\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"flags\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"domain\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"range\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"default_tokenizer\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"normalizer\",\n"
+"        \"ShortText\"\n"
+"      ]\n"
+"    ],\n"
+"    [\n"
+"      256,\n"
+"      \"Store\",\n"
+"      \"/home/vagrant/droonga/000/db.0000100\",\n"
+"      \"TABLE_PAT_KEY|PERSISTENT\",\n"
+"      \"ShortText\",\n"
+"      null,\n"
+"      null,\n"
+"      null\n"
+"    ]\n"
+"  ]\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid "Because it is a cluster, the other endpoint returns the same result."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ curl \"http://node1:10041/d/table_list\" | jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1401358908.9126804,\n"
+"    0.001600027084350586\n"
+"  ],\n"
+"  [\n"
+"    [\n"
+"      [\n"
+"        \"id\",\n"
+"        \"UInt32\"\n"
+"      ],\n"
+"      [\n"
+"        \"name\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"path\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"flags\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"domain\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"range\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"default_tokenizer\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"normalizer\",\n"
+"        \"ShortText\"\n"
+"      ]\n"
+"    ],\n"
+"    [\n"
+"      256,\n"
+"      \"Store\",\n"
+"      \"/home/vagrant/droonga/000/db.0000100\",\n"
+"      \"TABLE_PAT_KEY|PERSISTENT\",\n"
+"      \"ShortText\",\n"
+"      null,\n"
+"      null,\n"
+"      null\n"
+"    ]\n"
+"  ]\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Next, create the new columns `name` and `location` in the `Store` table with "
+"the `column_create` command, like:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ curl \"$endpoint/d/column_create?table=Store&name=name&flags=COLUMN_SCALAR&ty"
+"pe=ShortText\" | jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1401358348.6541538,\n"
+"    0.0004096031188964844\n"
+"  ],\n"
+"  true\n"
+"]\n"
+"$ curl \"$endpoint/d/column_create?table=Store&name=location&flags=COLUMN_SCALA"
+"R&type=WGS84GeoPoint\" | jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1401358359.084659,\n"
+"    0.002511262893676758\n"
+"  ],\n"
+"  true\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid "Create indexes also."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ curl \"$endpoint/d/table_create?name=Term&flags=TABLE_PAT_KEY&key_type=ShortT"
+"ext&default_tokenizer=TokenBigram&normalizer=NormalizerAuto\" | jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1401358475.7229664,\n"
+"    0.002419710159301758\n"
+"  ],\n"
+"  true\n"
+"]\n"
+"$ curl \"$endpoint/d/column_create?table=Term&name=store_name&flags=COLUMN_INDE"
+"X|WITH_POSITION&type=Store&source=name\" | jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1401358494.1656318,\n"
+"    0.006799221038818359\n"
+"  ],\n"
+"  true\n"
+"]\n"
+"$ curl \"$endpoint/d/table_create?name=Location&flags=TABLE_PAT_KEY&key_type=WG"
+"S84GeoPoint\" | jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1401358505.708896,\n"
+"    0.0016951560974121094\n"
+"  ],\n"
+"  true\n"
+"]\n"
+"$ curl \"$endpoint/d/column_create?table=Location&name=store&flags=COLUMN_INDEX"
+"&type=Store&source=location\" | jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1401358519.6187897,\n"
+"    0.024788379669189453\n"
+"  ],\n"
+"  true\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid "Let's confirm results:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ curl \"$endpoint/d/table_list\" | jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1416390011.7194495,\n"
+"    0.0015704631805419922\n"
+"  ],\n"
+"  [\n"
+"    [\n"
+"      [\n"
+"        \"id\",\n"
+"        \"UInt32\"\n"
+"      ],\n"
+"      [\n"
+"        \"name\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"path\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"flags\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"domain\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"range\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"default_tokenizer\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"normalizer\",\n"
+"        \"ShortText\"\n"
+"      ]\n"
+"    ],\n"
+"    [\n"
+"      261,\n"
+"      \"Location\",\n"
+"      \"/home/droonga-engine/droonga/databases/000/db.0000105\",\n"
+"      \"TABLE_PAT_KEY|PERSISTENT\",\n"
+"      \"WGS84GeoPoint\",\n"
+"      null,\n"
+"      null,\n"
+"      null\n"
+"    ],\n"
+"    [\n"
+"      256,\n"
+"      \"Store\",\n"
+"      \"/home/droonga-engine/droonga/databases/000/db.0000100\",\n"
+"      \"TABLE_PAT_KEY|PERSISTENT\",\n"
+"      \"ShortText\",\n"
+"      null,\n"
+"      null,\n"
+"      null\n"
+"    ],\n"
+"    [\n"
+"      259,\n"
+"      \"Term\",\n"
+"      \"/home/droonga-engine/droonga/databases/000/db.0000103\",\n"
+"      \"TABLE_PAT_KEY|PERSISTENT\",\n"
+"      \"ShortText\",\n"
+"      null,\n"
+"      \"TokenBigram\",\n"
+"      \"NormalizerAuto\"\n"
+"    ]\n"
+"  ]\n"
+"]\n"
+"$ curl \"$endpoint/d/column_list?table=Store\" | jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1416390069.515929,\n"
+"    0.001077413558959961\n"
+"  ],\n"
+"  [\n"
+"    [\n"
+"      [\n"
+"        \"id\",\n"
+"        \"UInt32\"\n"
+"      ],\n"
+"      [\n"
+"        \"name\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"path\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"type\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"flags\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"domain\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"range\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"source\",\n"
+"        \"ShortText\"\n"
+"      ]\n"
+"    ],\n"
+"    [\n"
+"      256,\n"
+"      \"_key\",\n"
+"      \"\",\n"
+"      \"\",\n"
+"      \"COLUMN_SCALAR\",\n"
+"      \"Store\",\n"
+"      \"ShortText\",\n"
+"      []\n"
+"    ],\n"
+"    [\n"
+"      258,\n"
+"      \"location\",\n"
+"      \"/home/droonga-engine/droonga/databases/000/db.0000102\",\n"
+"      \"fix\",\n"
+"      \"COLUMN_SCALAR\",\n"
+"      \"Store\",\n"
+"      \"WGS84GeoPoint\",\n"
+"      []\n"
+"    ],\n"
+"    [\n"
+"      257,\n"
+"      \"name\",\n"
+"      \"/home/droonga-engine/droonga/databases/000/db.0000101\",\n"
+"      \"var\",\n"
+"      \"COLUMN_SCALAR\",\n"
+"      \"Store\",\n"
+"      \"ShortText\",\n"
+"      []\n"
+"    ]\n"
+"  ]\n"
+"]\n"
+"$ curl \"$endpoint/d/column_list?table=Term\" | jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1416390110.143951,\n"
+"    0.0013172626495361328\n"
+"  ],\n"
+"  [\n"
+"    [\n"
+"      [\n"
+"        \"id\",\n"
+"        \"UInt32\"\n"
+"      ],\n"
+"      [\n"
+"        \"name\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"path\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"type\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"flags\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"domain\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"range\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"source\",\n"
+"        \"ShortText\"\n"
+"      ]\n"
+"    ],\n"
+"    [\n"
+"      259,\n"
+"      \"_key\",\n"
+"      \"\",\n"
+"      \"\",\n"
+"      \"COLUMN_SCALAR\",\n"
+"      \"Term\",\n"
+"      \"ShortText\",\n"
+"      []\n"
+"    ],\n"
+"    [\n"
+"      260,\n"
+"      \"store_name\",\n"
+"      \"/home/droonga-engine/droonga/databases/000/db.0000104\",\n"
+"      \"index\",\n"
+"      \"COLUMN_INDEX|WITH_POSITION\",\n"
+"      \"Term\",\n"
+"      \"Store\",\n"
+"      [\n"
+"        \"name\"\n"
+"      ]\n"
+"    ]\n"
+"  ]\n"
+"]\n"
+"$ curl \"$endpoint/d/column_list?table=Location\" | jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1416390163.0140722,\n"
+"    0.0009713172912597656\n"
+"  ],\n"
+"  [\n"
+"    [\n"
+"      [\n"
+"        \"id\",\n"
+"        \"UInt32\"\n"
+"      ],\n"
+"      [\n"
+"        \"name\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"path\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"type\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"flags\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"domain\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"range\",\n"
+"        \"ShortText\"\n"
+"      ],\n"
+"      [\n"
+"        \"source\",\n"
+"        \"ShortText\"\n"
+"      ]\n"
+"    ],\n"
+"    [\n"
+"      261,\n"
+"      \"_key\",\n"
+"      \"\",\n"
+"      \"\",\n"
+"      \"COLUMN_SCALAR\",\n"
+"      \"Location\",\n"
+"      \"WGS84GeoPoint\",\n"
+"      []\n"
+"    ],\n"
+"    [\n"
+"      262,\n"
+"      \"store\",\n"
+"      \"/home/droonga-engine/droonga/databases/000/db.0000106\",\n"
+"      \"index\",\n"
+"      \"COLUMN_INDEX\",\n"
+"      \"Location\",\n"
+"      \"Store\",\n"
+"      [\n"
+"        \"location\"\n"
+"      ]\n"
+"    ]\n"
+"  ]\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid "### Load data to a table"
+msgstr ""
+
+msgid ""
+"Let's load data to the `Store` table.\n"
+"First, prepare the data as a JSON file `stores.json`."
+msgstr ""
+
+msgid "stores.json:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"[\n"
+"[\"_key\",\"name\",\"location\"],\n"
+"[\"store0\",\"1st Avenue & 75th St. - New York NY  (W)\",\"40.770262,-73.954798\"],\n"
+"[\"store1\",\"76th & Second - New York NY  (W)\",\"40.771056,-73.956757\"],\n"
+"[\"store2\",\"2nd Ave. & 9th Street - New York NY\",\"40.729445,-73.987471\"],\n"
+"[\"store3\",\"15th & Third - New York NY  (W)\",\"40.733946,-73.9867\"],\n"
+"[\"store4\",\"41st and Broadway - New York NY  (W)\",\"40.755111,-73.986225\"],\n"
+"[\"store5\",\"84th & Third Ave - New York NY  (W)\",\"40.777485,-73.954979\"],\n"
+"[\"store6\",\"150 E. 42nd Street - New York NY  (W)\",\"40.750784,-73.975582\"],\n"
+"[\"store7\",\"West 43rd and Broadway - New York NY  (W)\",\"40.756197,-73.985624\"],"
+"\n"
+"[\"store8\",\"Macy's 35th Street Balcony - New York NY\",\"40.750703,-73.989787\"],\n"
+"[\"store9\",\"Macy's 6th Floor - Herald Square - New York NY  (W)\",\"40.750703,-73"
+".989787\"],\n"
+"[\"store10\",\"Herald Square- Macy's - New York NY\",\"40.750703,-73.989787\"],\n"
+"[\"store11\",\"Macy's 5th Floor - Herald Square - New York NY  (W)\",\"40.750703,-7"
+"3.989787\"],\n"
+"[\"store12\",\"80th & York - New York NY  (W)\",\"40.772204,-73.949862\"],\n"
+"[\"store13\",\"Columbus @ 67th - New York NY  (W)\",\"40.774009,-73.981472\"],\n"
+"[\"store14\",\"45th & Broadway - New York NY  (W)\",\"40.75766,-73.985719\"],\n"
+"[\"store15\",\"Marriott Marquis - Lobby - New York NY\",\"40.759123,-73.984927\"],\n"
+"[\"store16\",\"Second @ 81st - New York NY  (W)\",\"40.77466,-73.954447\"],\n"
+"[\"store17\",\"52nd & Seventh - New York NY  (W)\",\"40.761829,-73.981141\"],\n"
+"[\"store18\",\"1585 Broadway (47th) - New York NY  (W)\",\"40.759806,-73.985066\"],\n"
+"[\"store19\",\"85th & First - New York NY  (W)\",\"40.776101,-73.949971\"],\n"
+"[\"store20\",\"92nd & 3rd - New York NY  (W)\",\"40.782606,-73.951235\"],\n"
+"[\"store21\",\"165 Broadway - 1 Liberty - New York NY  (W)\",\"40.709727,-74.011395"
+"\"],\n"
+"[\"store22\",\"1656 Broadway - New York NY  (W)\",\"40.762434,-73.983364\"],\n"
+"[\"store23\",\"54th & Broadway - New York NY  (W)\",\"40.764275,-73.982361\"],\n"
+"[\"store24\",\"Limited Brands-NYC - New York NY\",\"40.765219,-73.982025\"],\n"
+"[\"store25\",\"19th & 8th - New York NY  (W)\",\"40.743218,-74.000605\"],\n"
+"[\"store26\",\"60th & Broadway-II - New York NY  (W)\",\"40.769196,-73.982576\"],\n"
+"[\"store27\",\"63rd & Broadway - New York NY  (W)\",\"40.771376,-73.982709\"],\n"
+"[\"store28\",\"195 Broadway - New York NY  (W)\",\"40.710703,-74.009485\"],\n"
+"[\"store29\",\"2 Broadway - New York NY  (W)\",\"40.704538,-74.01324\"],\n"
+"[\"store30\",\"2 Columbus Ave. - New York NY  (W)\",\"40.769262,-73.984764\"],\n"
+"[\"store31\",\"NY Plaza - New York NY  (W)\",\"40.702802,-74.012784\"],\n"
+"[\"store32\",\"36th and Madison - New York NY  (W)\",\"40.748917,-73.982683\"],\n"
+"[\"store33\",\"125th St. btwn Adam Clayton & FDB - New York NY\",\"40.808952,-73.94"
+"8229\"],\n"
+"[\"store34\",\"70th & Broadway - New York NY  (W)\",\"40.777463,-73.982237\"],\n"
+"[\"store35\",\"2138 Broadway - New York NY  (W)\",\"40.781078,-73.981167\"],\n"
+"[\"store36\",\"118th & Frederick Douglas Blvd. - New York NY  (W)\",\"40.806176,-73"
+".954109\"],\n"
+"[\"store37\",\"42nd & Second - New York NY  (W)\",\"40.750069,-73.973393\"],\n"
+"[\"store38\",\"Broadway @ 81st - New York NY  (W)\",\"40.784972,-73.978987\"],\n"
+"[\"store39\",\"Fashion Inst of Technology - New York NY\",\"40.746948,-73.994557\"]\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid "Then, send it as a POST request of the `load` command, like:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ curl --data \"@stores.json\" \"$endpoint/d/load?table=Store\" | jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1401358564.909,\n"
+"    0.158\n"
+"  ],\n"
+"  [\n"
+"    40\n"
+"  ]\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid "Now all data in the JSON file are successfully loaded."
+msgstr ""
+
+msgid "### Select data from a table"
+msgstr ""
+
+msgid "OK, all data is now ready."
+msgstr ""
+
+msgid "As the starter, let's select initial ten records with the `select` command:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ curl \"$endpoint/d/select?table=Store&output_columns=name&limit=10\" | jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1401362059.7437818,\n"
+"    4.935264587402344e-05\n"
+"  ],\n"
+"  [\n"
+"    [\n"
+"      [\n"
+"        40\n"
+"      ],\n"
+"      [\n"
+"        [\n"
+"          \"name\",\n"
+"          \"ShortText\"\n"
+"        ]\n"
+"      ],\n"
+"      [\n"
+"        \"1st Avenue & 75th St. - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"76th & Second - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"Herald Square- Macy's - New York NY\"\n"
+"      ],\n"
+"      [\n"
+"        \"Macy's 5th Floor - Herald Square - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"80th & York - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"Columbus @ 67th - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"45th & Broadway - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"Marriott Marquis - Lobby - New York NY\"\n"
+"      ],\n"
+"      [\n"
+"        \"Second @ 81st - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"52nd & Seventh - New York NY  (W)\"\n"
+"      ]\n"
+"    ]\n"
+"  ]\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid "Of course you can specify conditions via the `query` option:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ curl \"$endpoint/d/select?table=Store&query=Columbus&match_columns=name&outpu"
+"t_columns=name&limit=10\" | jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1398670157.661574,\n"
+"    0.0012705326080322266\n"
+"  ],\n"
+"  [\n"
+"    [\n"
+"      [\n"
+"        2\n"
+"      ],\n"
+"      [\n"
+"        [\n"
+"          \"_key\",\n"
+"          \"ShortText\"\n"
+"        ]\n"
+"      ],\n"
+"      [\n"
+"        \"Columbus @ 67th - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"2 Columbus Ave. - New York NY  (W)\"\n"
+"      ]\n"
+"    ]\n"
+"  ]\n"
+"]\n"
+"$ curl \"$endpoint/d/select?table=Store&filter=name@'Ave'&output_columns=name&l"
+"imit=10\" | jq \".\"\n"
+"[\n"
+"  [\n"
+"    0,\n"
+"    1398670586.193325,\n"
+"    0.0003848075866699219\n"
+"  ],\n"
+"  [\n"
+"    [\n"
+"      [\n"
+"        3\n"
+"      ],\n"
+"      [\n"
+"        [\n"
+"          \"_key\",\n"
+"          \"ShortText\"\n"
+"        ]\n"
+"      ],\n"
+"      [\n"
+"        \"2nd Ave. & 9th Street - New York NY\"\n"
+"      ],\n"
+"      [\n"
+"        \"84th & Third Ave - New York NY  (W)\"\n"
+"      ],\n"
+"      [\n"
+"        \"2 Columbus Ave. - New York NY  (W)\"\n"
+"      ]\n"
+"    ]\n"
+"  ]\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid "## Conclusion"
+msgstr ""
+
+msgid ""
+"In this tutorial, you did set up a [Droonga][] cluster on [Ubuntu Linux][Ubunt"
+"u] or [CentOS][] computers.\n"
+"Moreover, you load data to it and select data from it successfully, as a [Groo"
+"nga][] compatible server."
+msgstr ""
+
+msgid ""
+"Currently, Droonga supports only some limited features of Groonga compatible c"
+"ommands.\n"
+"See the [command reference][] for more details."
+msgstr ""
+
+msgid ""
+"Next, let's learn [how to backup and restore contents of a Droonga cluster](.."
+"/dump-restore/)."
+msgstr ""
+
+msgid ""
+"  [Ubuntu]: http://www.ubuntu.com/\n"
+"  [CentOS]: https://www.centos.org/\n"
+"  [Droonga]: https://droonga.org/\n"
+"  [Groonga]: http://groonga.org/\n"
+"  [command reference]: ../../reference/commands/"
+msgstr ""

  Added: _po/ja/tutorial/1.1.0/index.po (+40 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/tutorial/1.1.0/index.po    2014-11-30 23:20:40 +0900 (07cf6f3)
@@ -0,0 +1,40 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: Droonga tutorial\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid "## For beginners and Groonga users"
+msgstr ""
+
+msgid ""
+" * [Getting started/How to migrate from Groonga?](groonga/)\n"
+"   * [How to prepare virtual machines for experiments?](virtual-machines-for-e"
+"xperiments/)\n"
+" * [How to backup and restore the database?](dump-restore/)\n"
+" * [How to add a new replica to an existing cluster?](add-replica/)\n"
+" * [How to benchmark Droonga with Groonga?](benchmark/)"
+msgstr ""
+
+msgid "## For low-layer application developers"
+msgstr ""
+
+msgid " * [Basic usage of low-layer commands](basic/)"
+msgstr ""
+
+msgid "## For plugin developers"
+msgstr ""
+
+msgid " * [Plugin development tutorial](plugin-development/)"
+msgstr ""

  Added: _po/ja/tutorial/1.1.0/plugin-development/adapter/index.po (+946 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/tutorial/1.1.0/plugin-development/adapter/index.po    2014-11-30 23:20:40 +0900 (423f0b7)
@@ -0,0 +1,946 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: \"Plugin: Adapt requests and responses, to add a new command based on ot"
+"her existing commands\"\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## The goal of this tutorial"
+msgstr ""
+
+msgid "Learning steps to develop a Droonga plugin by yourself."
+msgstr ""
+
+msgid ""
+"This page focuses on the \"adaption\" by Droonga plugins.\n"
+"At the last, we create a new command `storeSearch` based on the existing `sear"
+"ch` command, with a small practical plugin."
+msgstr ""
+
+msgid "## Precondition"
+msgstr ""
+
+msgid "* You must complete the [basic tutorial][]."
+msgstr ""
+
+msgid "## Adaption for incoming messages"
+msgstr ""
+
+msgid ""
+"First, let's study basics with a simple logger plugin named `sample-logger` af"
+"fects at the adaption phase."
+msgstr ""
+
+msgid ""
+"We sometime need to modify incoming requests from outside to Droonga Engine.\n"
+"We can use a plugin for this purpose."
+msgstr ""
+
+msgid ""
+"Let's see how to create a plugin for the *pre adaption phase*, in this section"
+"."
+msgstr ""
+
+msgid "### Directory Structure"
+msgstr ""
+
+msgid ""
+"Assume that we are going to add a new plugin to the system built in the [basic"
+" tutorial][].\n"
+"In that tutorial, Droonga engine was placed under `engine` directory."
+msgstr ""
+
+msgid ""
+"Plugins need to be placed in an appropriate directory. Let's create the direct"
+"ory:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# cd engine\n"
+"# mkdir -p lib/droonga/plugins\n"
+"~~~"
+msgstr ""
+
+msgid "After creating the directory, the directory structure should be like this:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"engine\n"
+"├── catalog.json\n"
+"├── fluentd.conf\n"
+"└── lib\n"
+"    └── droonga\n"
+"        └── plugins\n"
+"~~~"
+msgstr ""
+
+msgid "### Create a plugin"
+msgstr ""
+
+msgid ""
+"You must put codes for a plugin into a file which has the name *same to the pl"
+"ugin itself*.\n"
+"Because the plugin now you creating is `sample-logger`, put codes into a file "
+"`sample-logger.rb` in the `droonga/plugins` directory."
+msgstr ""
+
+msgid "lib/droonga/plugins/sample-logger.rb:"
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"require \"droonga/plugin\""
+msgstr ""
+
+msgid ""
+"module Droonga\n"
+"  module Plugins\n"
+"    module SampleLoggerPlugin\n"
+"      extend Plugin\n"
+"      register(\"sample-logger\")"
+msgstr ""
+
+msgid ""
+"      class Adapter < Droonga::Adapter\n"
+"        # You'll put codes to modify messages here.\n"
+"      end\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+
+msgid "This plugin does nothing except registering itself to the Droonga Engine."
+msgstr ""
+
+msgid ""
+" * The `sample-logger` is the name of the plugin itself. You'll use it in your"
+" `catalog.json`, to activate the plugin.\n"
+" * As the example above, you must define your plugin as a module.\n"
+" * Behaviors at the pre adaption phase is defined a class called *adapter*.\n"
+"   An adapter class must be defined as a subclass of the `Droonga::Adapter`, u"
+"nder the namespace of the plugin module."
+msgstr ""
+
+msgid "### Activate the plugin with `catalog.json`"
+msgstr ""
+
+msgid ""
+"You need to update `catalog.json` to activate your plugin.\n"
+"Insert the name of the plugin `\"sample-logger\"` to the `\"plugins\"` list under "
+"the dataset, like:"
+msgstr ""
+
+msgid "catalog.json:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(snip)\n"
+"      \"datasets\": {\n"
+"        \"Starbucks\": {\n"
+"          (snip)\n"
+"          \"plugins\": [\"sample-logger\", \"groonga\", \"crud\", \"search\", \"dump\", \"s"
+"tatus\"],\n"
+"(snip)\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Note: you must place `\"sample-logger\"` before `\"search\"`, because the `sample-"
+"logger` plugin depends on the `search`. Droonga Engine applies plugins at the "
+"pre adaption phase in the order defined in the `catalog.json`, so you must res"
+"olve plugin dependencies by your hand (for now)."
+msgstr ""
+
+msgid "### Run and test"
+msgstr ""
+
+msgid ""
+"Let's get Droonga started.\n"
+"Note that you need to specify `./lib` directory in `RUBYLIB` environment varia"
+"ble in order to make ruby possible to find your plugin."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# kill $(cat fluentd.pid)\n"
+"# RUBYLIB=./lib fluentd --config fluentd.conf --log fluentd.log --daemon fluen"
+"td.pid\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Then, verify that the engine is correctly working.\n"
+"First, create a request as a JSON."
+msgstr ""
+
+msgid "search-columbus.json:"
+msgstr ""
+
+msgid ""
+"~~~json\n"
+"{\n"
+"  \"dataset\" : \"Starbucks\",\n"
+"  \"type\"    : \"search\",\n"
+"  \"body\"    : {\n"
+"    \"queries\" : {\n"
+"      \"stores\" : {\n"
+"        \"source\"    : \"Store\",\n"
+"        \"condition\" : {\n"
+"          \"query\"   : \"Columbus\",\n"
+"          \"matchTo\" : \"_key\"\n"
+"        },\n"
+"        \"output\" : {\n"
+"          \"elements\"   : [\n"
+"            \"startTime\",\n"
+"            \"elapsedTime\",\n"
+"            \"count\",\n"
+"            \"attributes\",\n"
+"            \"records\"\n"
+"          ],\n"
+"          \"attributes\" : [\"_key\"],\n"
+"          \"limit\"      : -1\n"
+"        }\n"
+"      }\n"
+"    }\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"This is corresponding to the example to search \"Columbus\" in the [basic tutori"
+"al][].\n"
+"Note that the request for the Protocol Adapter is encapsulated in `\"body\"` ele"
+"ment."
+msgstr ""
+
+msgid "Send the request to engine with `droonga-request`:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# droonga-request --tag starbucks search-columbus.json\n"
+"Elapsed time: 0.021544\n"
+"[\n"
+"  \"droonga.message\",\n"
+"  1392617533,\n"
+"  {\n"
+"    \"inReplyTo\": \"1392617533.9644868\",\n"
+"    \"statusCode\": 200,\n"
+"    \"type\": \"search.result\",\n"
+"    \"body\": {\n"
+"      \"stores\": {\n"
+"        \"count\": 2,\n"
+"        \"records\": [\n"
+"          [\n"
+"            \"Columbus @ 67th - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"2 Columbus Ave. - New York NY  (W)\"\n"
+"          ]\n"
+"        ]\n"
+"      }\n"
+"    }\n"
+"  }\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid "This is the search result."
+msgstr ""
+
+msgid "### Do something in the plugin: take logs"
+msgstr ""
+
+msgid ""
+"The plugin we have created do nothing so far. Let's get the plugin to do some "
+"interesting."
+msgstr ""
+
+msgid "First of all, trap `search` request and log it. Update the plugin like below:"
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"(snip)\n"
+"    module SampleLoggerPlugin\n"
+"      extend Plugin\n"
+"      register(\"sample-logger\")"
+msgstr ""
+
+msgid ""
+"      class Adapter < Droonga::Adapter\n"
+"        input_message.pattern = [\"type\", :equal, \"search\"]"
+msgstr ""
+
+msgid ""
+"        def adapt_input(input_message)\n"
+"          logger.info(\"SampleLoggerPlugin::Adapter\", :message => input_message"
+")\n"
+"        end\n"
+"      end\n"
+"    end\n"
+"(snip)\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"The line beginning with `input_message.pattern` is a configuration.\n"
+"This example defines a plugin for any incoming message with `\"type\":\"search\"`."
+"\n"
+"See the [reference manual's configuration section](../../../reference/plugin/a"
+"dapter/#config)"
+msgstr ""
+
+msgid ""
+"The method `adapt_input` is called for every incoming message matching to the "
+"pattern.\n"
+"The argument `input_message` is a wrapped version of the incoming message."
+msgstr ""
+
+msgid "Restart fluentd:"
+msgstr ""
+
+msgid "Send the request same as the previous section:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# droonga-request --tag starbucks search-columbus.json\n"
+"Elapsed time: 0.014714\n"
+"[\n"
+"  \"droonga.message\",\n"
+"  1392618037,\n"
+"  {\n"
+"    \"inReplyTo\": \"1392618037.935901\",\n"
+"    \"statusCode\": 200,\n"
+"    \"type\": \"search.result\",\n"
+"    \"body\": {\n"
+"      \"stores\": {\n"
+"        \"count\": 2,\n"
+"        \"records\": [\n"
+"          [\n"
+"            \"Columbus @ 67th - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"2 Columbus Ave. - New York NY  (W)\"\n"
+"          ]\n"
+"        ]\n"
+"      }\n"
+"    }\n"
+"  }\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid "You will see something like below fluentd's log in `fluentd.log`:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"2014-02-17 15:20:37 +0900 [info]: SampleLoggerPlugin::Adapter message=#<Droong"
+"a::InputMessage:0x007f8ae3e1dd98 @raw_message={\"dataset\"=>\"Starbucks\", \"type\"="
+">\"search\", \"body\"=>{\"queries\"=>{\"stores\"=>{\"source\"=>\"Store\", \"condition\"=>{\"q"
+"uery\"=>\"Columbus\", \"matchTo\"=>\"_key\"}, \"output\"=>{\"elements\"=>[\"startTime\", \"e"
+"lapsedTime\", \"count\", \"attributes\", \"records\"], \"attributes\"=>[\"_key\"], \"limit"
+"\"=>-1}}}}, \"replyTo\"=>{\"type\"=>\"search.result\", \"to\"=>\"127.0.0.1:64591/droonga"
+"\"}, \"id\"=>\"1392618037.935901\", \"date\"=>\"2014-02-17 15:20:37 +0900\", \"appliedAd"
+"apters\"=>[]}>\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"This shows the message is received by our `SampleLoggerPlugin::Adapter` and th"
+"en passed to Droonga. Here we can modify the message before the actual data pr"
+"ocessing."
+msgstr ""
+
+msgid "### Modify messages with the plugin"
+msgstr ""
+
+msgid ""
+"Suppose that we want to restrict the number of records returned in the respons"
+"e, say `1`.\n"
+"What we need to do is set `limit` to be `1` for every request.\n"
+"Update plugin like below:"
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"(snip)\n"
+"        def adapt_input(input_message)\n"
+"          logger.info(\"SampleLoggerPlugin::Adapter\", :message => input_message"
+")\n"
+"          input_message.body[\"queries\"][\"stores\"][\"output\"][\"limit\"] = 1\n"
+"        end\n"
+"(snip)\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Like above, you can modify the incoming message via methods of the argument `i"
+"nput_message`.\n"
+"See the [reference manual for the message class](../../../reference/plugin/ada"
+"pter/#classes-Droonga-InputMessage)."
+msgstr ""
+
+msgid ""
+"After restart, the response always includes only one record in `records` secti"
+"on."
+msgstr ""
+
+msgid "Send the request same as the previous:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# droonga-request --tag starbucks search-columbus.json\n"
+"Elapsed time: 0.017343\n"
+"[\n"
+"  \"droonga.message\",\n"
+"  1392618279,\n"
+"  {\n"
+"    \"inReplyTo\": \"1392618279.0578449\",\n"
+"    \"statusCode\": 200,\n"
+"    \"type\": \"search.result\",\n"
+"    \"body\": {\n"
+"      \"stores\": {\n"
+"        \"count\": 2,\n"
+"        \"records\": [\n"
+"          [\n"
+"            \"Columbus @ 67th - New York NY  (W)\"\n"
+"          ]\n"
+"        ]\n"
+"      }\n"
+"    }\n"
+"  }\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Note that `count` is still `2` because `limit` does not affect to `count`. See"
+" [search][] for details of the `search` command."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"2014-02-17 15:24:39 +0900 [info]: SampleLoggerPlugin::Adapter message=#<Droong"
+"a::InputMessage:0x007f956685c908 @raw_message={\"dataset\"=>\"Starbucks\", \"type\"="
+">\"search\", \"body\"=>{\"queries\"=>{\"stores\"=>{\"source\"=>\"Store\", \"condition\"=>{\"q"
+"uery\"=>\"Columbus\", \"matchTo\"=>\"_key\"}, \"output\"=>{\"elements\"=>[\"startTime\", \"e"
+"lapsedTime\", \"count\", \"attributes\", \"records\"], \"attributes\"=>[\"_key\"], \"limit"
+"\"=>-1}}}}, \"replyTo\"=>{\"type\"=>\"search.result\", \"to\"=>\"127.0.0.1:64616/droonga"
+"\"}, \"id\"=>\"1392618279.0578449\", \"date\"=>\"2014-02-17 15:24:39 +0900\", \"appliedA"
+"dapters\"=>[]}>\n"
+"~~~"
+msgstr ""
+
+msgid "## Adaption for outgoing messages"
+msgstr ""
+
+msgid ""
+"In case we need to modify outgoing messages from Droonga Engine, for example, "
+"search results, then we can do it simply by another method.\n"
+"In this section, we are going to define a method to adapt outgoing messages."
+msgstr ""
+
+msgid "### Add a method to adapt outgoing messages"
+msgstr ""
+
+msgid ""
+"Let's take logs of results of `search` command.\n"
+"Define the `adapt_output` method to process outgoing messages.\n"
+"Remove `adapt_input` at this moment for the simplicity."
+msgstr ""
+
+msgid ""
+"        def adapt_output(output_message)\n"
+"          logger.info(\"SampleLoggerPlugin::Adapter\", :message => output_messag"
+"e)\n"
+"        end\n"
+"      end\n"
+"    end\n"
+"(snip)\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"The method `adapt_output` is called only for outgoing messages triggered by in"
+"coming messages trapped by the plugin itself, even if there is only the matchi"
+"ng pattern and the `adapt_input` method is not defined.\n"
+"See the [reference manual for plugin developers](../../../reference/plugin/ada"
+"pter/) for more details."
+msgstr ""
+
+msgid "### Run"
+msgstr ""
+
+msgid "Let's restart fluentd:"
+msgstr ""
+
+msgid ""
+"And send search request (Use the same JSON for request as in the previous sect"
+"ion):"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# droonga-request --tag starbucks search-columbus.json\n"
+"Elapsed time: 0.015491\n"
+"[\n"
+"  \"droonga.message\",\n"
+"  1392619269,\n"
+"  {\n"
+"    \"inReplyTo\": \"1392619269.184789\",\n"
+"    \"statusCode\": 200,\n"
+"    \"type\": \"search.result\",\n"
+"    \"body\": {\n"
+"      \"stores\": {\n"
+"        \"count\": 2,\n"
+"        \"records\": [\n"
+"          [\n"
+"            \"Columbus @ 67th - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"2 Columbus Ave. - New York NY  (W)\"\n"
+"          ]\n"
+"        ]\n"
+"      }\n"
+"    }\n"
+"  }\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid "The fluentd's log should be like as follows:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"2014-02-17 15:41:09 +0900 [info]: SampleLoggerPlugin::Adapter message=#<Droong"
+"a::OutputMessage:0x007fddcad4d5a0 @raw_message={\"dataset\"=>\"Starbucks\", \"type\""
+"=>\"dispatcher\", \"body\"=>{\"stores\"=>{\"count\"=>2, \"records\"=>[[\"Columbus @ 67th "
+"- New York NY  (W)\"], [\"2 Columbus Ave. - New York NY  (W)\"]]}}, \"replyTo\"=>{\""
+"type\"=>\"search.result\", \"to\"=>\"127.0.0.1:64724/droonga\"}, \"id\"=>\"1392619269.18"
+"4789\", \"date\"=>\"2014-02-17 15:41:09 +0900\", \"appliedAdapters\"=>[\"Droonga::Plug"
+"ins::SampleLoggerPlugin::Adapter\", \"Droonga::Plugins::Error::Adapter\"]}>\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"This shows that the result of `search` is passed to the `adapt_output` method "
+"(and logged), then outputted."
+msgstr ""
+
+msgid "### Modify results in the adaption phase"
+msgstr ""
+
+msgid ""
+"Let's modify the result at the *post adaption phase*.\n"
+"For example, add `completedAt` attribute that shows the time completed the req"
+"uest.\n"
+"Update your plugin as follows:"
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"(snip)\n"
+"        def adapt_output(output_message)\n"
+"          logger.info(\"SampleLoggerPlugin::Adapter\", :message => output_messag"
+"e)\n"
+"          output_message.body[\"stores\"][\"completedAt\"] = Time.now\n"
+"        end\n"
+"(snip)\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Like above, you can modify the outgoing message via methods of the argument `o"
+"utput_message`. \n"
+"See the [reference manual for the message class](../../../reference/plugin/ada"
+"pter/#classes-Droonga-OutputMessage)."
+msgstr ""
+
+msgid "Send the same search request:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# droonga-request --tag starbucks search-columbus.json\n"
+"Elapsed time: 0.013983\n"
+"[\n"
+"  \"droonga.message\",\n"
+"  1392619528,\n"
+"  {\n"
+"    \"inReplyTo\": \"1392619528.235121\",\n"
+"    \"statusCode\": 200,\n"
+"    \"type\": \"search.result\",\n"
+"    \"body\": {\n"
+"      \"stores\": {\n"
+"        \"count\": 2,\n"
+"        \"records\": [\n"
+"          [\n"
+"            \"Columbus @ 67th - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"2 Columbus Ave. - New York NY  (W)\"\n"
+"          ]\n"
+"        ],\n"
+"        \"completedAt\": \"2014-02-17T06:45:28.247669Z\"\n"
+"      }\n"
+"    }\n"
+"  }\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Now you can see `completedAt` attribute containing the time completed the requ"
+"est.\n"
+"The results in `fluentd.log` will be like this:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"2014-02-17 15:45:28 +0900 [info]: SampleLoggerPlugin::Adapter message=#<Droong"
+"a::OutputMessage:0x007fd384f3ab60 @raw_message={\"dataset\"=>\"Starbucks\", \"type\""
+"=>\"dispatcher\", \"body\"=>{\"stores\"=>{\"count\"=>2, \"records\"=>[[\"Columbus @ 67th "
+"- New York NY  (W)\"], [\"2 Columbus Ave. - New York NY  (W)\"]]}}, \"replyTo\"=>{\""
+"type\"=>\"search.result\", \"to\"=>\"127.0.0.1:64849/droonga\"}, \"id\"=>\"1392619528.23"
+"5121\", \"date\"=>\"2014-02-17 15:45:28 +0900\", \"appliedAdapters\"=>[\"Droonga::Plug"
+"ins::SampleLoggerPlugin::Adapter\", \"Droonga::Plugins::Error::Adapter\"]}>\n"
+"~~~"
+msgstr ""
+
+msgid "## Adaption for both incoming and outgoing messages"
+msgstr ""
+
+msgid ""
+"We have learned the basics of plugins for the pre adaption phase and the post "
+"adaption phase so far.\n"
+"Let's try to build more practical plugin."
+msgstr ""
+
+msgid ""
+"You may feel the Droonga's `search` command is too flexible for your purpose.\n"
+"Here, we're going to add our own `storeSearch` command to wrap the `search` co"
+"mmand in order to provide an application-specific and simple interface, with a"
+" new plugin named `store-search`."
+msgstr ""
+
+msgid "### Accepting of simple requests"
+msgstr ""
+
+msgid ""
+"First, create the `store-search` plugin.\n"
+"Remember, you must put codes into a file which has the name same to the plugin"
+" now you are creating.\n"
+"So, the file is `store-search.rb` in the `droonga/plugins` directory. Then def"
+"ine your `StoreSearchPlugin` as follows:"
+msgstr ""
+
+msgid "lib/droonga/plugins/store-search.rb:"
+msgstr ""
+
+msgid ""
+"module Droonga\n"
+"  module Plugins\n"
+"    module StoreSearchPlugin\n"
+"      extend Plugin\n"
+"      register(\"store-search\")"
+msgstr ""
+
+msgid ""
+"      class Adapter < Droonga::Adapter\n"
+"        input_message.pattern = [\"type\", :equal, \"storeSearch\"]"
+msgstr ""
+
+msgid ""
+"        def adapt_input(input_message)\n"
+"          logger.info(\"StoreSearchPlugin::Adapter\", :message => input_message)"
+msgstr ""
+
+msgid ""
+"          query = input_message.body[\"query\"]\n"
+"          logger.info(\"storeSearch\", :query => query)"
+msgstr ""
+
+msgid ""
+"          body = {\n"
+"            \"queries\" => {\n"
+"              \"stores\" => {\n"
+"                \"source\"    => \"Store\",\n"
+"                \"condition\" => {\n"
+"                  \"query\"   => query,\n"
+"                  \"matchTo\" => \"_key\",\n"
+"                },\n"
+"                \"output\"    => {\n"
+"                  \"elements\"   => [\n"
+"                    \"startTime\",\n"
+"                    \"elapsedTime\",\n"
+"                    \"count\",\n"
+"                    \"attributes\",\n"
+"                    \"records\",\n"
+"                  ],\n"
+"                  \"attributes\" => [\n"
+"                    \"_key\",\n"
+"                  ],\n"
+"                  \"limit\"      => -1,\n"
+"                }\n"
+"              }\n"
+"            }\n"
+"          }"
+msgstr ""
+
+msgid ""
+"          input_message.type = \"search\"\n"
+"          input_message.body = body\n"
+"        end\n"
+"      end\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Then update your `catalog.json` to activate the plugin.\n"
+"Remove the `sample-logger` plugin previously created."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(snip)\n"
+"      \"datasets\": {\n"
+"        \"Starbucks\": {\n"
+"          (snip)\n"
+"          \"plugins\": [\"store-search\", \"groonga\", \"crud\", \"search\", \"dump\", \"st"
+"atus\"],\n"
+"(snip)\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Remember, you must place your plugin `\"store-search\"` before the `\"search\"` be"
+"cause yours depends on it."
+msgstr ""
+
+msgid "Now you can use this new command by the following request:"
+msgstr ""
+
+msgid "store-search-columbus.json:"
+msgstr ""
+
+msgid ""
+"~~~json\n"
+"{\n"
+"  \"dataset\" : \"Starbucks\",\n"
+"  \"type\"    : \"storeSearch\",\n"
+"  \"body\"    : {\n"
+"    \"query\" : \"Columbus\"\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid "In order to issue this request, you need to run:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# droonga-request --tag starbucks store-search-columbus.json\n"
+"Elapsed time: 0.01494\n"
+"[\n"
+"  \"droonga.message\",\n"
+"  1392621168,\n"
+"  {\n"
+"    \"inReplyTo\": \"1392621168.0119512\",\n"
+"    \"statusCode\": 200,\n"
+"    \"type\": \"storeSearch.result\",\n"
+"    \"body\": {\n"
+"      \"stores\": {\n"
+"        \"count\": 2,\n"
+"        \"records\": [\n"
+"          [\n"
+"            \"Columbus @ 67th - New York NY  (W)\"\n"
+"          ],\n"
+"          [\n"
+"            \"2 Columbus Ave. - New York NY  (W)\"\n"
+"          ]\n"
+"        ]\n"
+"      }\n"
+"    }\n"
+"  }\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid "And you will see the result on fluentd's log in `fluentd.log`:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"2014-02-17 16:12:48 +0900 [info]: StoreSearchPlugin::Adapter message=#<Droonga"
+"::InputMessage:0x007fe4791d3958 @raw_message={\"dataset\"=>\"Starbucks\", \"type\"=>"
+"\"storeSearch\", \"body\"=>{\"query\"=>\"Columbus\"}, \"replyTo\"=>{\"type\"=>\"storeSearch"
+".result\", \"to\"=>\"127.0.0.1:49934/droonga\"}, \"id\"=>\"1392621168.0119512\", \"date\""
+"=>\"2014-02-17 16:12:48 +0900\", \"appliedAdapters\"=>[]}>\n"
+"2014-02-17 16:12:48 +0900 [info]: storeSearch query=\"Columbus\"\n"
+"~~~"
+msgstr ""
+
+msgid "Now we can perform store search with simple requests."
+msgstr ""
+
+msgid ""
+"Note: look at the `\"type\"` of the response message. Now it became `\"storeSearc"
+"h.result\"`, from `\"search.result\"`. Because it is triggered from the incoming "
+"message with the type `\"storeSearch\"`, the outgoing message has the type `\"(in"
+"coming command).result\"` automatically. In other words, you don't have to chan"
+"ge the type of the outgoing messages, like `input_message.type = \"search\"` in "
+"the method `adapt_input`."
+msgstr ""
+
+msgid "### Returning of simple responses"
+msgstr ""
+
+msgid ""
+"Second, let's return results in more simple way: just an array of the names of"
+" stores."
+msgstr ""
+
+msgid "Define the `adapt_output` method as follows."
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"(snip)\n"
+"    module StoreSearchPlugin\n"
+"      extend Plugin\n"
+"      register(\"store-search\")"
+msgstr ""
+
+msgid ""
+"      class Adapter < Droonga::Adapter\n"
+"        (snip)"
+msgstr ""
+
+msgid ""
+"        def adapt_output(output_message)\n"
+"          logger.info(\"StoreSearchPlugin::Adapter\", :message => output_message"
+")"
+msgstr ""
+
+msgid ""
+"          records = output_message.body[\"stores\"][\"records\"]\n"
+"          simplified_results = records.flatten"
+msgstr ""
+
+msgid ""
+"          output_message.body = simplified_results\n"
+"        end\n"
+"      end\n"
+"    end\n"
+"(snip)\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"The `adapt_output` method receives outgoing messages only corresponding to the"
+" incoming messages trapped by the plugin."
+msgstr ""
+
+msgid "Send the request:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# droonga-request --tag starbucks store-search-columbus.json\n"
+"Elapsed time: 0.014859\n"
+"[\n"
+"  \"droonga.message\",\n"
+"  1392621288,\n"
+"  {\n"
+"    \"inReplyTo\": \"1392621288.158763\",\n"
+"    \"statusCode\": 200,\n"
+"    \"type\": \"storeSearch.result\",\n"
+"    \"body\": [\n"
+"      \"Columbus @ 67th - New York NY  (W)\",\n"
+"      \"2 Columbus Ave. - New York NY  (W)\"\n"
+"    ]\n"
+"  }\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid "The log in `fluentd.log` will be like this:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"2014-02-17 16:14:48 +0900 [info]: StoreSearchPlugin::Adapter message=#<Droonga"
+"::InputMessage:0x007ffb8ada9d68 @raw_message={\"dataset\"=>\"Starbucks\", \"type\"=>"
+"\"storeSearch\", \"body\"=>{\"query\"=>\"Columbus\"}, \"replyTo\"=>{\"type\"=>\"storeSearch"
+".result\", \"to\"=>\"127.0.0.1:49960/droonga\"}, \"id\"=>\"1392621288.158763\", \"date\"="
+">\"2014-02-17 16:14:48 +0900\", \"appliedAdapters\"=>[]}>\n"
+"2014-02-17 16:14:48 +0900 [info]: storeSearch query=\"Columbus\"\n"
+"2014-02-17 16:14:48 +0900 [info]: StoreSearchPlugin::Adapter message=#<Droonga"
+"::OutputMessage:0x007ffb8ad78e48 @raw_message={\"dataset\"=>\"Starbucks\", \"type\"="
+">\"dispatcher\", \"body\"=>{\"stores\"=>{\"count\"=>2, \"records\"=>[[\"Columbus @ 67th -"
+" New York NY  (W)\"], [\"2 Columbus Ave. - New York NY  (W)\"]]}}, \"replyTo\"=>{\"t"
+"ype\"=>\"storeSearch.result\", \"to\"=>\"127.0.0.1:49960/droonga\"}, \"id\"=>\"139262128"
+"8.158763\", \"date\"=>\"2014-02-17 16:14:48 +0900\", \"appliedAdapters\"=>[\"Droonga::"
+"Plugins::StoreSearchPlugin::Adapter\", \"Droonga::Plugins::Error::Adapter\"], \"or"
+"iginalTypes\"=>[\"storeSearch\"]}>\n"
+"~~~"
+msgstr ""
+
+msgid "Now you've got the simplified response."
+msgstr ""
+
+msgid ""
+"In the way just described, we can use adapter to implement the application spe"
+"cific search logic."
+msgstr ""
+
+msgid "## Conclusion"
+msgstr ""
+
+msgid ""
+"We have learned how to add a new command based only on a custom adapter and an"
+" existing command.\n"
+"In the process, we also have learned how to receive and modify messages, both "
+"of incoming and outgoing."
+msgstr ""
+
+msgid ""
+"See also the [reference manual](../../../reference/plugin/adapter/) for more d"
+"etails."
+msgstr ""
+
+msgid ""
+"  [basic tutorial]: ../../basic/\n"
+"  [overview]: ../../../overview/\n"
+"  [search]: ../../../reference/commands/select/"
+msgstr ""

  Added: _po/ja/tutorial/1.1.0/plugin-development/handler/index.po (+715 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/tutorial/1.1.0/plugin-development/handler/index.po    2014-11-30 23:20:40 +0900 (7a47bcf)
@@ -0,0 +1,715 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: \"Plugin: Handle requests on all volumes, to add a new command working a"
+"round the storage\"\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## The goal of this tutorial"
+msgstr ""
+
+msgid ""
+"This tutorial aims to help you to learn how to develop plugins which do someth"
+"ing dispersively for/in each volume, around the handling phase.\n"
+"In other words, this tutorial describes *how to add a new simple command to th"
+"e Droonga Engine*."
+msgstr ""
+
+msgid "## Precondition"
+msgstr ""
+
+msgid "* You must complete the [tutorial for the adaption phase][adapter]."
+msgstr ""
+
+msgid "## Handling of requests"
+msgstr ""
+
+msgid ""
+"When a request is transferred from the adaption phase, the Droonga Engine ente"
+"rs into the *processing phase*."
+msgstr ""
+
+msgid ""
+"In the processing phase, the Droonga Engine processes the request step by step"
+".\n"
+"One *step* is constructed from some sub phases: *planning phase*, *distributio"
+"n phase*, *handling phase*, and *collection phase*."
+msgstr ""
+
+msgid ""
+" * At the *planning phase*, the Droonga Engine generates multiple sub steps to"
+" process the request.\n"
+"   In simple cases, you don't have to write codes for this phase, then there i"
+"s just one sub step to handle the request.\n"
+" * At the *distribution phase*, the Droonga Engine distributes task messages f"
+"or the request, to multiple volumes.\n"
+"   (It is completely done by the Droonga Engine itself, so this phase is not p"
+"luggable.)\n"
+" * At the *handling phase*, *each single volume simply processes only one dist"
+"ributed task message as its input, and returns a result.*\n"
+"   This is the time that actual storage accesses happen.\n"
+"   Actually, some commands (`search`, `add`, `create_table` and so on) access "
+"to the storage at the time.\n"
+" * At the *collection phase*, the Droonga Engine collects results from volumes"
+" to one unified result.\n"
+"   There are some useful generic collectors, so you don't have to write codes "
+"for this phase in most cases."
+msgstr ""
+
+msgid ""
+"After all steps are finished, the Droonga Engine transfers the result to the p"
+"ost adaption phase."
+msgstr ""
+
+msgid ""
+"A class to define operations at the handling phase is called *handler*.\n"
+"Put simply, adding of a new handler means adding a new command."
+msgstr ""
+
+msgid "## Design a read-only command `countRecords`"
+msgstr ""
+
+msgid ""
+"Here, in this tutorial, we are going to add a new custom `countRecords` comman"
+"d.\n"
+"At first, let's design it."
+msgstr ""
+
+msgid ""
+"The command reports the number of records about a specified table, for each si"
+"ngle volume.\n"
+"So it will help you to know how records are distributed in the cluster.\n"
+"Nothing is changed by the command, so it is a *read-only command*."
+msgstr ""
+
+msgid "The request must have the name of one table, like:"
+msgstr ""
+
+msgid ""
+"~~~json\n"
+"{\n"
+"  \"dataset\" : \"Starbucks\",\n"
+"  \"type\"    : \"countRecords\",\n"
+"  \"body\"    : {\n"
+"    \"table\": \"Store\"\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Create a JSON file `count-records.json` with the content above.\n"
+"We'll use it for testing."
+msgstr ""
+
+msgid ""
+"The response must have number of records in the table, for each single volume."
+"\n"
+"They can be appear in an array, like:"
+msgstr ""
+
+msgid ""
+"~~~json\n"
+"{\n"
+"  \"inReplyTo\": \"(message id)\",\n"
+"  \"statusCode\": 200,\n"
+"  \"type\": \"countRecords.result\",\n"
+"  \"body\": [10, 10]\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"If there are 2 volumes and 20 records are stored evenly, the array will have t"
+"wo elements like above.\n"
+"It means that a volume has 10 records and another one also has 10 records."
+msgstr ""
+
+msgid ""
+"We're going to create a plugin to accept such requests and return such respons"
+"es."
+msgstr ""
+
+msgid "### Directory structure"
+msgstr ""
+
+msgid ""
+"The directory structure for plugins are in same rule as explained in the [tuto"
+"rial for the adaption phase][adapter].\n"
+"Now let's create the `count-records` plugin, as the file `count-records.rb`. T"
+"he directory tree will be:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"lib\n"
+"└── droonga\n"
+"    └── plugins\n"
+"            └── count-records.rb\n"
+"~~~"
+msgstr ""
+
+msgid "Then, create a skeleton of a plugin as follows:"
+msgstr ""
+
+msgid "lib/droonga/plugins/count-records.rb:"
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"require \"droonga/plugin\""
+msgstr ""
+
+msgid ""
+"module Droonga\n"
+"  module Plugins\n"
+"    module CountRecordsPlugin\n"
+"      extend Plugin\n"
+"      register(\"count-records\")\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+
+msgid "### Define a \"step\" for the command"
+msgstr ""
+
+msgid "Define a \"step\" for the new `countRecords` command, in your plugin. Like:"
+msgstr ""
+
+msgid ""
+"module Droonga\n"
+"  module Plugins\n"
+"    module CountRecordsPlugin\n"
+"      extend Plugin\n"
+"      register(\"count-records\")"
+msgstr ""
+
+msgid ""
+"      define_single_step do |step|\n"
+"        step.name = \"countRecords\"\n"
+"      end\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"The `step.name` equals to the name of the command itself.\n"
+"Currently we just define the name of the command.\n"
+"That's all."
+msgstr ""
+
+msgid "### Define the handling logic"
+msgstr ""
+
+msgid ""
+"The command has no handler, so it does nothing yet.\n"
+"Let's define the behavior."
+msgstr ""
+
+msgid ""
+"      define_single_step do |step|\n"
+"        step.name = \"countRecords\"\n"
+"        step.handler = :Handler\n"
+"      end"
+msgstr ""
+
+msgid ""
+"      class Handler < Droonga::Handler\n"
+"        def handle(message)\n"
+"          [0]\n"
+"        end\n"
+"      end\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+
+msgid "The class `Handler` is a handler class for our new command."
+msgstr ""
+
+msgid ""
+" * It must inherit a builtin-class `Droonga::Handler`.\n"
+" * It implements the logic to handle requests.\n"
+"   Its instance method `#handle` actually handles requests."
+msgstr ""
+
+msgid ""
+"Currently the handler does nothing and returns an result including an array of"
+" a number.\n"
+"The returned value is used to construct the response body."
+msgstr ""
+
+msgid ""
+"The handler is bound to the step with the configuration `step.handler`.\n"
+"Because we define the class `Handler` after `define_single_step`, we specify t"
+"he handler class with a symbol `:Handler`.\n"
+"If you define the handler class before `define_single_step`, then you can writ"
+"e as `step.handler = Handler` simply.\n"
+"Moreover, a class path string like `\"OtherPlugin::Handler\"` is also available."
+msgstr ""
+
+msgid ""
+"Then, we also have to bind a collector to the step, with the configuration `st"
+"ep.collector`."
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"# (snip)\n"
+"      define_single_step do |step|\n"
+"        step.name = \"countRecords\"\n"
+"        step.handler = :Handler\n"
+"        step.collector = Collectors::Sum\n"
+"      end\n"
+"# (snip)\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"The `Collectors::Sum` is one of built-in collectors.\n"
+"It merges results returned from handler instances for each volume to one resul"
+"t."
+msgstr ""
+
+msgid "### Activate the plugin with `catalog.json`"
+msgstr ""
+
+msgid ""
+"Update catalog.json to activate this plugin.\n"
+"Add `\"count-records\"` to `\"plugins\"`."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(snip)\n"
+"      \"datasets\": {\n"
+"        \"Starbucks\": {\n"
+"          (snip)\n"
+"          \"plugins\": [\"count-records\", \"groonga\", \"crud\", \"search\", \"dump\", \"s"
+"tatus\"],\n"
+"(snip)\n"
+"~~~"
+msgstr ""
+
+msgid "### Run and test"
+msgstr ""
+
+msgid ""
+"Let's get Droonga started.\n"
+"Note that you need to specify ./lib directory in RUBYLIB environment variable "
+"in order to make ruby possible to find your plugin."
+msgstr ""
+
+msgid ""
+"    # kill $(cat fluentd.pid)\n"
+"    # RUBYLIB=./lib fluentd --config fluentd.conf --log fluentd.log --daemon f"
+"luentd.pid"
+msgstr ""
+
+msgid ""
+"Then, send a request message for the `countRecords` command to the Droonga Eng"
+"ine."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# droonga-request --tag starbucks count-records.json\n"
+"Elapsed time: 0.01494\n"
+"[\n"
+"  \"droonga.message\",\n"
+"  1392621168,\n"
+"  {\n"
+"    \"inReplyTo\": \"1392621168.0119512\",\n"
+"    \"statusCode\": 200,\n"
+"    \"type\": \"countRecords.result\",\n"
+"    \"body\": [\n"
+"      0,\n"
+"      0,\n"
+"      0\n"
+"    ]\n"
+"  }\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"You'll get a response message like above.\n"
+"Look at these points:"
+msgstr ""
+
+msgid ""
+" * The `type` of the response becomes `countRecords.result`.\n"
+"   It is automatically named by the Droonga Engine.\n"
+" * The format of the `body` is same to the returned value of the handler's `ha"
+"ndle` method."
+msgstr ""
+
+msgid "There are three elements in the array. Why?"
+msgstr ""
+
+msgid ""
+" * Remember that the `Starbucks` dataset was configured with two replicas and "
+"three sub volumes for each replica, in the `catalog.json` of [the basic tutori"
+"al][basic].\n"
+" * Because it is a read-only command, a request is delivered to only one repli"
+"ca (and it is chosen at random).\n"
+"   Then only three single volumes receive the command, so only three results a"
+"ppear, not six.\n"
+"   (TODO: I have to add a figure to indicate active nodes: [000, 001, 002, 010"
+", 011, 012] => [000, 001, 002])\n"
+" * The `Collectors::Sum` collects them.\n"
+"   Those three results are joined to just one array by the collector."
+msgstr ""
+
+msgid ""
+"As the result, just one array with three elements appears in the final respons"
+"e."
+msgstr ""
+
+msgid "### Read-only access to the storage"
+msgstr ""
+
+msgid ""
+"Now, each instance of the handler class always returns `0` as its result.\n"
+"Let's implement codes to count up the number of records from the actual storag"
+"e."
+msgstr ""
+
+msgid ""
+"~~~ruby\n"
+"# (snip)\n"
+"      class Handler < Droonga::Handler\n"
+"        def handle(message)\n"
+"          request = message.request\n"
+"          table_name = request[\"table\"]\n"
+"          table = @context[table_name]\n"
+"          count = table.size\n"
+"          [count]\n"
+"        end\n"
+"      end\n"
+"# (snip)\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Look at the argument of the `handle` method.\n"
+"It is different from the one an adapter receives.\n"
+"A handler receives a message meaning a distributed task.\n"
+"So you have to extract the request message from the distributed task by the co"
+"de `request = message.request`."
+msgstr ""
+
+msgid ""
+"The instance variable `@context` is an instance of `Groonga::Context` for the "
+"storage of the corresponding single volume.\n"
+"See the [class reference of Rroonga][Groonga::Context].\n"
+"You can use any feature of Rroonga via `@context`.\n"
+"For now, we simply access to the table itself by its name and read the value o"
+"f its `size` method - it returns the number of records."
+msgstr ""
+
+msgid ""
+"Then, test it.\n"
+"Restart the Droonga Engine and send the request again."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# kill $(cat fluentd.pid)\n"
+"# RUBYLIB=./lib fluentd --config fluentd.conf --log fluentd.log --daemon fluen"
+"td.pid\n"
+"# droonga-request --tag starbucks count-records.json\n"
+"Elapsed time: 0.01494\n"
+"[\n"
+"  \"droonga.message\",\n"
+"  1392621168,\n"
+"  {\n"
+"    \"inReplyTo\": \"1392621168.0119512\",\n"
+"    \"statusCode\": 200,\n"
+"    \"type\": \"countRecords.result\",\n"
+"    \"body\": [\n"
+"      14,\n"
+"      15,\n"
+"      11\n"
+"    ]\n"
+"  }\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid "Because there are totally 40 records, they are stored evenly like above."
+msgstr ""
+
+msgid "## Design a read-write command `deleteStores`"
+msgstr ""
+
+msgid "Next, let's add another new custom command `deleteStores`."
+msgstr ""
+
+msgid ""
+"The command deletes records of the `Store` table, from the storage.\n"
+"Because it modifies something in existing storage, it is a *read-write command"
+"*."
+msgstr ""
+
+msgid "The request must have the condition to select records to be deleted, like:"
+msgstr ""
+
+msgid ""
+"~~~json\n"
+"{\n"
+"  \"dataset\" : \"Starbucks\",\n"
+"  \"type\"    : \"deleteStores\",\n"
+"  \"body\"    : {\n"
+"    \"keyword\": \"Broadway\"\n"
+"  }\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Any record including the given keyword `\"Broadway\"` in its `\"key\"` is deleted "
+"from the storage of all volumes."
+msgstr ""
+
+msgid ""
+"Create a JSON file `delete-stores-broadway.json` with the content above.\n"
+"We'll use it for testing."
+msgstr ""
+
+msgid "The response must have a boolean value to indicate \"success\" or \"fail\", like:"
+msgstr ""
+
+msgid ""
+"~~~json\n"
+"{\n"
+"  \"inReplyTo\": \"(message id)\",\n"
+"  \"statusCode\": 200,\n"
+"  \"type\": \"deleteStores.result\",\n"
+"  \"body\": true\n"
+"}\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"If the request is successfully processed, the `body` becomes `true`. Otherwise"
+" `false`.\n"
+"The `body` is just one boolean value, because we don't have to receive multipl"
+"e results from volumes."
+msgstr ""
+
+msgid "### Directory Structure"
+msgstr ""
+
+msgid ""
+"Now let's create the `delete-stores` plugin, as the file `delete-stores.rb`. T"
+"he directory tree will be:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"lib\n"
+"└── droonga\n"
+"    └── plugins\n"
+"            └── delete-stores.rb\n"
+"~~~"
+msgstr ""
+
+msgid "lib/droonga/plugins/delete-stores.rb:"
+msgstr ""
+
+msgid ""
+"module Droonga\n"
+"  module Plugins\n"
+"    module DeleteStoresPlugin\n"
+"      extend Plugin\n"
+"      register(\"delete-stores\")\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+
+msgid "Define a \"step\" for the new `deleteStores` command, in your plugin. Like:"
+msgstr ""
+
+msgid ""
+"module Droonga\n"
+"  module Plugins\n"
+"    module DeleteStoresPlugin\n"
+"      extend Plugin\n"
+"      register(\"delete-stores\")"
+msgstr ""
+
+msgid ""
+"      define_single_step do |step|\n"
+"        step.name = \"deleteStores\"\n"
+"        step.write = true\n"
+"      end\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Look at a new configuration `step.write`.\n"
+"Because this command modifies the storage, we must indicate it clearly."
+msgstr ""
+
+msgid "Let's define the handler."
+msgstr ""
+
+msgid ""
+"      define_single_step do |step|\n"
+"        step.name = \"deleteStores\"\n"
+"        step.write = true\n"
+"        step.handler = :Handler\n"
+"        step.collector = Collectors::And\n"
+"      end"
+msgstr ""
+
+msgid ""
+"      class Handler < Droonga::Handler\n"
+"        def handle(message)\n"
+"          request = message.request\n"
+"          keyword = request[\"keyword\"]\n"
+"          table = @context[\"Store\"]\n"
+"          table.delete do |record|\n"
+"            record.key =~ keyword\n"
+"          end\n"
+"          true\n"
+"        end\n"
+"      end\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Remember, you have to extract the request message from the received task messa"
+"ge."
+msgstr ""
+
+msgid ""
+"The handler finds and deletes existing records which have the given keyword in"
+" its \"key\", by the [API of Rroonga][Groonga::Table_delete]."
+msgstr ""
+
+msgid ""
+"And, the `Collectors::And` is bound to the step by the configuration `step.col"
+"lector`.\n"
+"It is is also one of built-in collectors, and merges boolean values returned f"
+"rom handler instances for each volume, to one boolean value."
+msgstr ""
+
+msgid ""
+"Update catalog.json to activate this plugin.\n"
+"Add `\"delete-stores\"` to `\"plugins\"`."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"(snip)\n"
+"      \"datasets\": {\n"
+"        \"Starbucks\": {\n"
+"          (snip)\n"
+"          \"plugins\": [\"delete-stores\", \"count-records\", \"groonga\", \"crud\", \"se"
+"arch\", \"dump\", \"status\"],\n"
+"(snip)\n"
+"~~~"
+msgstr ""
+
+msgid "Restart the Droonga Engine and send the request."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# kill $(cat fluentd.pid)\n"
+"# RUBYLIB=./lib fluentd --config fluentd.conf --log fluentd.log --daemon fluen"
+"td.pid\n"
+"# droonga-request --tag starbucks count-records.json\n"
+"Elapsed time: 0.01494\n"
+"[\n"
+"  \"droonga.message\",\n"
+"  1392621168,\n"
+"  {\n"
+"    \"inReplyTo\": \"1392621168.0119512\",\n"
+"    \"statusCode\": 200,\n"
+"    \"type\": \"deleteStores.result\",\n"
+"    \"body\": true\n"
+"  }\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Because results from volumes are unified to just one boolean value, the respon"
+"se's `body` is a `true`.\n"
+"As the verification, send the request of `countRecords` command."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# droonga-request --tag starbucks count-records.json\n"
+"Elapsed time: 0.01494\n"
+"[\n"
+"  \"droonga.message\",\n"
+"  1392621168,\n"
+"  {\n"
+"    \"inReplyTo\": \"1392621168.0119512\",\n"
+"    \"statusCode\": 200,\n"
+"    \"type\": \"countRecords.result\",\n"
+"    \"body\": [\n"
+"      7,\n"
+"      13,\n"
+"      6\n"
+"    ]\n"
+"  }\n"
+"]\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Note, the number of records are smaller than the previous result.\n"
+"This means that four or some records are deleted from each volume."
+msgstr ""
+
+msgid "## Conclusion"
+msgstr ""
+
+msgid ""
+"We have learned how to add a new simple command working around the data.\n"
+"In the process, we also have learned how to create plugins working in the hand"
+"ling phrase."
+msgstr ""
+
+msgid ""
+"  [adapter]: ../adapter\n"
+"  [basic]: ../basic\n"
+"  [Groonga::Context]: http://ranguba.org/rroonga/en/Groonga/Context.html\n"
+"  [Groonga::Table_delete]: http://ranguba.org/rroonga/en/Groonga/Table.html#de"
+"lete-instance_method"
+msgstr ""

  Added: _po/ja/tutorial/1.1.0/plugin-development/index.po (+159 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/tutorial/1.1.0/plugin-development/index.po    2014-11-30 23:20:40 +0900 (d21f828)
@@ -0,0 +1,159 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: Droonga plugin development tutorial\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## The goal of this tutorial"
+msgstr ""
+
+msgid ""
+"Learning steps to develop a Droonga plugin by yourself.\n"
+"You must complete the [basic tutorial][] before this."
+msgstr ""
+
+msgid "## What's \"plugin\"?"
+msgstr ""
+
+msgid ""
+"Plugin is one of the most important concept of Droonga.\n"
+"This makes Droonga flexible."
+msgstr ""
+
+msgid ""
+"Generally, data processing tasks in the real world need custom treatments of t"
+"he data, in various stages of the data stream.\n"
+"This is not easy to be done in one-size-fits-all approach."
+msgstr ""
+
+msgid ""
+" * One may want to modify incoming requests to work well with other systems, o"
+"ne may want to modify outgoing responses to help other systems understand the "
+"result.\n"
+" * One may want to do more complex data processing than that provided by Droon"
+"ga as built-in, to have direct storage access for efficiency.\n"
+" * One may need to control data distribution and collection logic of Droonga t"
+"o profit from the distributed nature of Droonga."
+msgstr ""
+
+msgid "You can use plugins in those situations."
+msgstr ""
+
+msgid "## Pluggable operations in Droonga Engine"
+msgstr ""
+
+msgid ""
+"In Droonga Engine, there are 2 large pluggable phases and 3 sub phases for plu"
+"gins.\n"
+"In other words, from the point of view of plugins, each plugin can do from 1 t"
+"o 4 operations.\n"
+"See the [overview][] to grasp the big picture."
+msgstr ""
+
+msgid ""
+"Adaption phase\n"
+": At this phase, a plugin can modify incoming requests and outgoing responses."
+msgstr ""
+
+msgid ""
+"Processing phase\n"
+": At this phase, a plugin can process incoming requests on each volume, step b"
+"y step."
+msgstr ""
+
+msgid "The processing phase includes 3 sub pluggable phases:"
+msgstr ""
+
+msgid ""
+"Handling phase\n"
+": At this phase, a plugin can do low-level data handling, for example, databas"
+"e operations and so on."
+msgstr ""
+
+msgid ""
+"Planning phase\n"
+": At this phase, a plugin can split an incoming request to multiple steps."
+msgstr ""
+
+msgid ""
+"Collection phase\n"
+": At this phase, a plugin can merge results from steps to a unified result."
+msgstr ""
+
+msgid ""
+"However, the point of view of these descriptions is based on the design of the"
+" system itself, so you're maybe confused.\n"
+"Then, let's shift our perspective on pluggable operations - what you want to d"
+"o by a plugin."
+msgstr ""
+
+msgid ""
+"Adding a new command based on another existing command.\n"
+": For example, you possibly want to define a shorthand command wrapping the co"
+"mplex `search` command.\n"
+"  *Adaption* of request and response messages makes it come true."
+msgstr ""
+
+msgid ""
+"Adding a new command working around the storage.\n"
+": For example, you possibly want to modify data stored in the storage as you l"
+"ike.\n"
+"  *Handling* of requests makes it come true."
+msgstr ""
+
+msgid ""
+"Adding a new command for a complex task\n"
+": For example, you possibly want to implement a powerful command like the buil"
+"t-in `search` command.\n"
+"  *Planning and collection* of requests make it come true."
+msgstr ""
+
+msgid ""
+"In this tutorial, we focus on the adaption at first.\n"
+"This is the most \"basic\" usecase of plugins, so it will help you to understand"
+" the overview of Droonga plugin development.\n"
+"Then, we focus on other cases in this order.\n"
+"Following this tutorial, you will learn how to write plugins.\n"
+"This will be the first step to create plugins fit with your own requirements."
+msgstr ""
+
+msgid "## How to develop plugins?"
+msgstr ""
+
+msgid "For more details, let's read these sub tutorials:"
+msgstr ""
+
+msgid ""
+" 1. [Adapt requests and responses, to add a new command based on other existin"
+"g commands][adapter].\n"
+" 2. [Handle requests on all volumes, to add a new command working around the s"
+"torage][handler].\n"
+" 3. Handle requests only on a specific volume, to add a new command around the"
+" storage more smartly. (under construction)\n"
+" 4. Distribute requests and collect responses, to add a new complex command ba"
+"sed on sub tasks. (under construction)"
+msgstr ""
+
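+msgid ""
+"Whichever type of plugin you develop, the development cycle is roughly the "
+"same as in the sub tutorials above: put your plugin script into a place "
+"visible via `RUBYLIB` (for example under `lib/droonga/plugins/`), add the "
+"plugin's name to the `\"plugins\"` list in your `catalog.json`, restart the "
+"Droonga Engine, and send a test request.\n"
+"For example (the request file name here is just a placeholder):"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"# kill $(cat fluentd.pid)\n"
+"# RUBYLIB=./lib fluentd --config fluentd.conf --log fluentd.log --daemon fluentd.pid\n"
+"# droonga-request --tag starbucks my-new-command.json\n"
+"~~~"
+msgstr ""
+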
+msgid ""
+"  [basic tutorial]: ../basic/\n"
+"  [overview]: ../../overview/\n"
+"  [adapter]: ./adapter/\n"
+"  [handler]: ./handler/\n"
+"  [distribute-collect]: ./distribute-collect/"
+msgstr ""

  Added: _po/ja/tutorial/1.1.0/virtual-machines-for-experiments/index.po (+399 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/tutorial/1.1.0/virtual-machines-for-experiments/index.po    2014-11-30 23:20:40 +0900 (1a737b7)
@@ -0,0 +1,399 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: \"Droonga tutorial: How to prepare virtual machines for experiments?\"\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## The goal of this tutorial"
+msgstr ""
+
+msgid "Learning steps to prepare multiple (three) virtual machines for experiments."
+msgstr ""
+
+msgid "## Why virtual machines?"
+msgstr ""
+
+msgid ""
+"Because Droonga is a distributed data processing system, you have to prepare m"
+"ultiple computers to construct a cluster.\n"
+"For safety (and good performance) you should use dedicated computers for Droon"
+"ga nodes."
+msgstr ""
+
+msgid ""
+"You need two or more computers for effective replication.\n"
+"If you are trying to manage node structure of your cluster effectively, three "
+"or more computers are required."
+msgstr ""
+
+msgid ""
+"However, it may cost money that using multiple server instances on virtual pri"
+"vate server services, even if you just want to do testing or development.\n"
+"So we recommend you to use private virtual machines on your own PC for such ca"
+"ses."
+msgstr ""
+
+msgid ""
+"Luckly, there is a useful software [Vagrant][] to manage virtual machines easi"
+"ly.\n"
+"This tutorial describes *how to prepare three virtual machines* by Vagrant."
+msgstr ""
+
+msgid "## Prepare a host machine"
+msgstr ""
+
+msgid ""
+"First, you have to prepare a PC as the host of VMs.\n"
+"Because each VM possibly requires much size RAM for building of native extensi"
+"ons, the host machine should have much more RAM - hopefully, 8GB or larger."
+msgstr ""
+
+msgid ""
+"In most cases you don't have to prepare much size RAM for each VM because ther"
+"e are pre-built binaries for major platforms.\n"
+"However, if your VM is running with a minor distribution or an edge version, t"
+"here may be no binary package for your platform. Then it will be compiled auto"
+"matically, requiring 2GB RAM.\n"
+"If you see any strange error while building native extensions, enlarge the siz"
+"e of RAM of each VM and try installation again.\n"
+"(See also the [appendix of this tutorial](#less-size-memory).)"
+msgstr ""
+
+msgid "## Steps to prepare VMs"
+msgstr ""
+
+msgid "### Install the VirtualBox"
+msgstr ""
+
+msgid ""
+"The Vagrant requires a backend to run VMs, so you have to install the most rec"
+"ommended one: [VirtualBox][].\n"
+"For example, if you use an [Ubuntu][] PC, it can be installed via the `apt` co"
+"mmand, like:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ sudo apt-get install virtualbox\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Otherwise go to the [VirtualBox web site][VirtualBox] and install it as instru"
+"cted."
+msgstr ""
+
+msgid "### Install the Vagrant"
+msgstr ""
+
+msgid ""
+"Next, install [Vagrant][].\n"
+"Go to the [Vagrant web site][Vagrant] and install it as instructed.\n"
+"For example, if you use an Ubuntu PC (x64):"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ wget https://dl.bintray.com/mitchellh/vagrant/vagrant_1.6.5_x86_64.deb\n"
+"$ sudo dpkg -i vagrant_1.6.5_x86_64.deb\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"NOTE: You can install Vagrant via `apt-get install vagrant` on Ubuntu 14.04, b"
+"ut don't use it because the version is too old to import boxes from [Vagrant C"
+"loud][]."
+msgstr ""
+
+msgid "### Determine a box and prepare a Vagrantfile"
+msgstr ""
+
+msgid ""
+"Go to the [Vagrant Cloud][] and find a box for your experiments.\n"
+"For example, if you use a [box for Ubuntu Trusty (x64)](https://vagrantcloud.c"
+"om/ubuntu/boxes/trusty64), you just have to do:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ mkdir droonga-ubuntu-trusty\n"
+"$ cd droonga-ubuntu-trusty\n"
+"$ vagrant init ubuntu/trusty64\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Then a file `Vagrantfile` is automatically generated there.\n"
+"However you should rewrite it completely for experiments of Droonga cluster, l"
+"ike following:"
+msgstr ""
+
+msgid "`Vagrantfile`:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"n_machines = 3\n"
+"box        = \"ubuntu/trusty64\""
+msgstr ""
+
+msgid ""
+"VAGRANTFILE_API_VERSION = \"2\"\n"
+"Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|\n"
+"  n_machines.times do |index|\n"
+"    config.vm.define :\"node#{index}\" do |node_config|\n"
+"      node_config.vm.box = box\n"
+"      node_config.vm.network(:private_network,\n"
+"                             :ip => \"192.168.100.#{50 + index}\")\n"
+"      node_config.vm.host_name = \"node#{index}\"\n"
+"      node_config.vm.provider(\"virtualbox\") do |virtual_box|\n"
+"        virtual_box.memory = 2048\n"
+"      end\n"
+"    end\n"
+"  end\n"
+"end\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Note, this `Vagrantfile` defines three VMs with 2GB (2048MB) RAM for each.\n"
+"So your host machine must have 6GB or more RAM.\n"
+"If your machine has less RAM, set the size to `512` (meaning 512MB) for now."
+msgstr ""
+
+msgid "### Start virtual machines"
+msgstr ""
+
+msgid "To start VMs, you just run the command `vagrant up`:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ vagrant up\n"
+"Bringing machine 'node0' up with 'virtualbox' provider...\n"
+"Bringing machine 'node1' up with 'virtualbox' provider...\n"
+"Bringing machine 'node2' up with 'virtualbox' provider...\n"
+"...\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Then Vagrant automatically downloads VM image from the [Vagrant Cloud][] web s"
+"ite and starts VMs.\n"
+"After preparation processes, there are three running VMs with IP address in a "
+"virtual private network: `192.168.100.50`, `192.168.100.51`, and `192.168.100."
+"52`."
+msgstr ""
+
+msgid ""
+"Let's confirm that they are correctly working.\n"
+"You can log in those VMs by the command `vagrant ssh`, like:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ vagrant ssh node0\n"
+"Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-36-generic x86_64)\n"
+"...\n"
+"vagrant �� node0:~$ exit\n"
+"~~~"
+msgstr ""
+
+msgid "### Register your VMs to your SSH client"
+msgstr ""
+
+msgid ""
+"You have to use `vagrant ssh` instead of regular `ssh`, to log in VMs.\n"
+"Moreover you have to `cd` to the `Vagrantfile`'s directory before running the "
+"command.\n"
+"It is annoying a little."
+msgstr ""
+
+msgid "So, let's register VMs to your local config file of the SSH client, like:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ vagrant ssh-config node0 >> ~/.ssh/config\n"
+"$ vagrant ssh-config node1 >> ~/.ssh/config\n"
+"$ vagrant ssh-config node2 >> ~/.ssh/config\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"After that you can log in to your VMs from the host computer by their name, wi"
+"thout `vagrant ssh` command:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ ssh node0\n"
+"~~~"
+msgstr ""
+
+msgid "### Configure your VMs to access each other by their host name"
+msgstr ""
+
+msgid ""
+"Because there is no name server, each VM cannot resolve host names of others.\n"
+"So you have to type their raw IP addresses for now.\n"
+"It's very annoying."
+msgstr ""
+
+msgid "So, let's modify hosts file on VMs, like:"
+msgstr ""
+
+msgid "`/etc/hosts`:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"127.0.0.1 localhost\n"
+"192.168.100.50 node0\n"
+"192.168.100.51 node1\n"
+"192.168.100.52 node2\n"
+"~~~"
+msgstr ""
+
+msgid "After that your VMs can communicate with each other by their host name."
+msgstr ""
+
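+msgid ""
+"For example, a quick check like the following (just an example) should "
+"confirm it:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ ssh node0\n"
+"vagrant@node0:~$ ping -c 1 node1\n"
+"vagrant@node0:~$ ping -c 1 node2\n"
+"vagrant@node0:~$ exit\n"
+"~~~"
+msgstr ""
+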
+msgid "### Shutdown VMs"
+msgstr ""
+
+msgid "You can shutdown all VMs by the command `vagrant halt`:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ vagrant halt\n"
+"~~~"
+msgstr ""
+
+msgid "Then Vagrant shuts down all VMs completely."
+msgstr ""
+
+msgid "### Cleanup VMs"
+msgstr ""
+
+msgid ""
+"If you want to clear all changes in VMs, then simply run the command `vagrant "
+"destroy -f`:"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ vagrant destroy -f\n"
+"$ vagrant up\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Then all changes will go away and you can start fresh VMs again.\n"
+"This will help you to improve installation scripts or something."
+msgstr ""
+
+msgid "### Appendix: if your host machine has less size RAM... {#less-size-memory}"
+msgstr ""
+
+msgid "Even if your computer has less size RAM, you don't have to give up."
+msgstr ""
+
+msgid ""
+"2GB RAM for each virtual machine is required just for building native extensio"
+"ns of [Rroonga][].\n"
+"In other words, Droonga nodes can work with less size RAM, if there are existi"
+"ng (already built) binary libraries."
+msgstr ""
+
+msgid "So you can install Droonga services for each VM step by step, like:"
+msgstr ""
+
+msgid ""
+" 1. Shutdown all VMs by `vagrant halt`.\n"
+" 2. Open the VirtualBox console by `virtualbox`.\n"
+" 3. Go to `properties` of a VM, and enlarge the size of RAM to 2GB (2048MB).\n"
+" 4. Start the VM, from the VirtualBox console.\n"
+" 5. Log in to the VM and install Droonga services.\n"
+" 6. Shutdown the VM.\n"
+" 7. Go to `properties` of the VM, and decrease the size of RAM to the original"
+" size.\n"
+" 8. Repeat steps from 3 to 7 for each VM."
+msgstr ""
+
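+msgid ""
+"If you prefer the command line to the GUI console, the RAM resizing steps can "
+"also be done with the `VBoxManage` command. Here is a rough sketch (the actual "
+"VM names are assigned by VirtualBox, so check them with `VBoxManage list vms` "
+"first):"
+msgstr ""
+
+msgid ""
+"~~~\n"
+"$ vagrant halt\n"
+"$ VBoxManage list vms\n"
+"$ VBoxManage modifyvm \"<name of the VM for node0>\" --memory 2048\n"
+"$ vagrant up node0\n"
+"$ vagrant ssh node0   # log in and install Droonga services\n"
+"$ vagrant halt node0\n"
+"$ VBoxManage modifyvm \"<name of the VM for node0>\" --memory 512\n"
+"~~~"
+msgstr ""
+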
+msgid "### Appendix: direct access to services running on a VM, from other computers"
+msgstr ""
+
+msgid ""
+"If the host machine is just a (remote) server and you are mainly using another"
+" local PC, then you may hope to access HTTP servers running on VMs from your P"
+"C directly.\n"
+"For example, testing the administration page on an web browser (Google Chrome,"
+" Mozilla Firefox, and so on.)"
+msgstr ""
+
+msgid ""
+"Port forwarding of OpenSSH will help you.\n"
+"Let's run following command on your host machine."
+msgstr ""
+
+msgid ""
+"~~~\n"
+"% ssh vagrant �� 192.168.100.50 \\\n"
+"      -i ~/.vagrant.d/insecure_private_key \\\n"
+"      -g \\\n"
+"      -L 20041:localhost:10041\n"
+"~~~"
+msgstr ""
+
+msgid ""
+"Then, actually you can see the administraton page provided by `droonga-http-se"
+"rver` on the VM `node0` (`192.168.100.50`), with the URL:\n"
+"`http://(IP address of hostname of the host machine):20041/`\n"
+"OpenSSH client running on the host machine automatically forwards inpouring pa"
+"ckets from the host machine's port `20041` to the VM's port `10041`."
+msgstr ""
+
+msgid ""
+" * Don't forget to specify the username `vagrant@` and the identity file.\n"
+" * The option `-g` is required to accept requests from outside of the host com"
+"puter itself."
+msgstr ""
+
+msgid "## Conclusion"
+msgstr ""
+
+msgid "In this tutorial, you did prepare three virtual machines for Droonga nodes."
+msgstr ""
+
+msgid ""
+"You can try [the \"getting started\" tutorial](../groonga/) and others with mult"
+"iple nodes."
+msgstr ""
+
+msgid ""
+"  [Vagrant]: https://www.vagrantup.com/\n"
+"  [Vagrant Cloud]: https://vagrantcloud.com/\n"
+"  [VirtualBox]: https://www.virtualbox.org/\n"
+"  [Groonga]: http://groonga.org/\n"
+"  [Rroonga]: https://github.com/ranguba/rroonga\n"
+"  [Ubuntu]: http://www.ubuntu.com/\n"
+"  [Droonga]: https://droonga.org/\n"
+"  [Groonga]: http://groonga.org/"
+msgstr ""

  Added: _po/ja/tutorial/1.1.0/watch.po (+277 -0) 100644
===================================================================
--- /dev/null
+++ _po/ja/tutorial/1.1.0/watch.po    2014-11-30 23:20:40 +0900 (e621284)
@@ -0,0 +1,277 @@
+msgid ""
+msgstr ""
+"Project-Id-Version: PACKAGE VERSION\n"
+"PO-Revision-Date: 2014-11-30 23:19+0900\n"
+"Language: ja\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+"Plural-Forms: nplurals=1; plural=0;\n"
+
+msgid ""
+"---\n"
+"title: Droonga tutorial\n"
+"layout: en\n"
+"---"
+msgstr ""
+
+msgid ""
+"* TOC\n"
+"{:toc}"
+msgstr ""
+
+msgid "## Real-time search"
+msgstr ""
+
+msgid "Droonga supports streaming-style real-time search."
+msgstr ""
+
+msgid "### Update configurations of the Droonga engine"
+msgstr ""
+
+msgid "Update your fluentd.conf and catalog.jsons, like:"
+msgstr ""
+
+msgid "fluentd.conf:"
+msgstr ""
+
+msgid ""
+"      <source>\n"
+"        type forward\n"
+"        port 24224\n"
+"      </source>\n"
+"      <match starbucks.message>\n"
+"        name localhost:24224/starbucks\n"
+"        type droonga\n"
+"      </match>\n"
+"    + <match droonga.message>\n"
+"    +   name localhost:24224/droonga\n"
+"    +   type droonga\n"
+"    + </match>\n"
+"      <match output.message>\n"
+"        type stdout\n"
+"      </match>"
+msgstr ""
+
+msgid "catalog.json:"
+msgstr ""
+
+msgid ""
+"      {\n"
+"        \"effective_date\": \"2013-09-01T00:00:00Z\",\n"
+"        \"zones\": [\n"
+"    +     \"localhost:24224/droonga\",\n"
+"          \"localhost:24224/starbucks\"\n"
+"        ],\n"
+"        \"farms\": {\n"
+"    +     \"localhost:24224/droonga\": {\n"
+"    +       \"device\": \".\",\n"
+"    +       \"capacity\": 10\n"
+"    +     },\n"
+"          \"localhost:24224/starbucks\": {\n"
+"            \"device\": \".\",\n"
+"            \"capacity\": 10\n"
+"          }\n"
+"        },\n"
+"        \"datasets\": {\n"
+"    +     \"Watch\": {\n"
+"    +       \"workers\": 2,\n"
+"    +       \"plugins\": [\"search\", \"groonga\", \"add\", \"watch\"],\n"
+"    +       \"number_of_replicas\": 1,\n"
+"    +       \"number_of_partitions\": 1,\n"
+"    +       \"partition_key\": \"_key\",\n"
+"    +       \"date_range\": \"infinity\",\n"
+"    +       \"ring\": {\n"
+"    +         \"localhost:23041\": {\n"
+"    +           \"weight\": 50,\n"
+"    +           \"partitions\": {\n"
+"    +             \"2013-09-01\": [\n"
+"    +               \"localhost:24224/droonga.watch\"\n"
+"    +             ]\n"
+"    +           }\n"
+"    +         }\n"
+"    +       }\n"
+"    +     },\n"
+"          \"Starbucks\": {\n"
+"            \"workers\": 0,\n"
+"            \"plugins\": [\"search\", \"groonga\", \"add\"],\n"
+"            \"number_of_replicas\": 2,\n"
+"            \"number_of_partitions\": 2,\n"
+"            \"partition_key\": \"_key\",\n"
+"            \"date_range\": \"infinity\",\n"
+"            \"ring\": {\n"
+"              \"localhost:23041\": {\n"
+"                \"weight\": 50,\n"
+"                \"partitions\": {\n"
+"                  \"2013-09-01\": [\n"
+"                    \"localhost:24224/starbucks.000\",\n"
+"                    \"localhost:24224/starbucks.001\"\n"
+"                  ]\n"
+"                }\n"
+"              },\n"
+"              \"localhost:23042\": {\n"
+"                \"weight\": 50,\n"
+"                \"partitions\": {\n"
+"                  \"2013-09-01\": [\n"
+"                    \"localhost:24224/starbucks.002\",\n"
+"                    \"localhost:24224/starbucks.003\"\n"
+"                  ]\n"
+"                }\n"
+"              }\n"
+"            }\n"
+"          }\n"
+"        },\n"
+"        \"options\": {\n"
+"          \"plugins\": []\n"
+"        }\n"
+"      }"
+msgstr ""
+
+msgid "### Add a streaming API to the protocol adapter"
+msgstr ""
+
+msgid "Add a streaming API to the protocol adapter, like;"
+msgstr ""
+
+msgid "application.js:"
+msgstr ""
+
+msgid ""
+"    var express = require('express'),\n"
+"        droonga = require('express-droonga');"
+msgstr ""
+
+msgid ""
+"    var application = express();\n"
+"    var server = require('http').createServer(application);\n"
+"    server.listen(3000); // the port to communicate with clients"
+msgstr ""
+
+msgid ""
+"    //============== INSERTED ==============\n"
+"    var streaming = {\n"
+"      'streaming': new droonga.command.HTTPStreaming({\n"
+"        dataset: 'Watch',\n"
+"        path: '/watch',\n"
+"        method: 'GET',\n"
+"        subscription: 'watch.subscribe',\n"
+"        unsubscription: 'watch.unsubscribe',\n"
+"        notification: 'watch.notification',\n"
+"        createSubscription: function(request) {\n"
+"          return {\n"
+"            condition: request.query.query\n"
+"          };\n"
+"        }\n"
+"      })\n"
+"    };\n"
+"    //============= /INSERTED =============="
+msgstr ""
+
+msgid ""
+"    application.droonga({\n"
+"      prefix: '/droonga',\n"
+"      tag: 'starbucks',\n"
+"      defaultDataset: 'Starbucks',\n"
+"      server: server, // this is required to initialize Socket.IO API!\n"
+"      plugins: [\n"
+"        droonga.API_REST,\n"
+"        droonga.API_SOCKET_IO,\n"
+"        droonga.API_GROONGA,\n"
+"        droonga.API_DROONGA\n"
+"    //============== INSERTED ==============\n"
+"        ,streaming\n"
+"    //============= /INSERTED ==============\n"
+"      ]\n"
+"    });"
+msgstr ""
+
+msgid ""
+"    application.get('/', function(req, res) {\n"
+"      res.sendfile(__dirname + '/index.html');\n"
+"    });"
+msgstr ""
+
+msgid "### Prepare feeds"
+msgstr ""
+
+msgid "Prepare \"feed\"s like:"
+msgstr ""
+
+msgid "feeds.jsons:"
+msgstr ""
+
+msgid ""
+"    {\"id\":\"feed:0\",\"dataset\":\"Watch\",\"type\":\"watch.feed\",\"body\":{\"targets\":{\"k"
+"ey\":\"old place 0\"}}}\n"
+"    {\"id\":\"feed:1\",\"dataset\":\"Watch\",\"type\":\"watch.feed\",\"body\":{\"targets\":{\"k"
+"ey\":\"new place 0\"}}}\n"
+"    {\"id\":\"feed:2\",\"dataset\":\"Watch\",\"type\":\"watch.feed\",\"body\":{\"targets\":{\"k"
+"ey\":\"old place 1\"}}}\n"
+"    {\"id\":\"feed:3\",\"dataset\":\"Watch\",\"type\":\"watch.feed\",\"body\":{\"targets\":{\"k"
+"ey\":\"new place 1\"}}}\n"
+"    {\"id\":\"feed:4\",\"dataset\":\"Watch\",\"type\":\"watch.feed\",\"body\":{\"targets\":{\"k"
+"ey\":\"old place 2\"}}}\n"
+"    {\"id\":\"feed:5\",\"dataset\":\"Watch\",\"type\":\"watch.feed\",\"body\":{\"targets\":{\"k"
+"ey\":\"new place 2\"}}}"
+msgstr ""
+
+msgid "### Try it!"
+msgstr ""
+
+msgid "At first, restart servers in each console."
+msgstr ""
+
+msgid "The engine:"
+msgstr ""
+
+msgid "    # fluentd --config fluentd.conf"
+msgstr ""
+
+msgid "The protocol adapter:"
+msgstr ""
+
+msgid "    # nodejs application.js"
+msgstr ""
+
+msgid "Next, connect to the streaming API via curl:"
+msgstr ""
+
+msgid "    # curl \"http://localhost:3000/droonga/watch?query=new\""
+msgstr ""
+
+msgid "Then the client starts to receive streamed results."
+msgstr ""
+
+msgid "Next, open a new console and send \"feed\"s to the engine like:"
+msgstr ""
+
+msgid "    # fluent-cat droonga.message < feeds.jsons"
+msgstr ""
+
+msgid ""
+"Then the client receives three results \"new place 0\", \"new place 1\", and \"new "
+"place 2\" like:"
+msgstr ""
+
+msgid ""
+"    {\"targets\":{\"key\":\"new place 0\"}}\n"
+"    {\"targets\":{\"key\":\"new place 1\"}}\n"
+"    {\"targets\":{\"key\":\"new place 2\"}}"
+msgstr ""
+
+msgid ""
+"They are search results for the query \"new\", given as a query parameter of the"
+" streaming API."
+msgstr ""
+
+msgid "Results can be appear in different order, like:"
+msgstr ""
+
+msgid ""
+"    {\"targets\":{\"key\":\"new place 1\"}}\n"
+"    {\"targets\":{\"key\":\"new place 0\"}}\n"
+"    {\"targets\":{\"key\":\"new place 2\"}}"
+msgstr ""
+
+msgid "because \"feed\"s are processed in multiple workers asynchronously."
+msgstr ""

  Added: ja/reference/1.1.0/catalog/index.md (+20 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/catalog/index.md    2014-11-30 23:20:40 +0900 (9a32db4)
@@ -0,0 +1,20 @@
+---
+title: カタログ
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/catalog/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+複数のリソースが集まり、Droongaネットワークを構成します。それらのリソースを **カタログ** に記述しなければいけません。ネットワーク上のすべてのノードは同じカタログを共有します。
+
+カタログの指定はバージョン管理されています。利用可能なバージョンは以下のとおりです:
+
+ * [version 2](version2/)
+ * [version 1](version1/): (deprecated since 1.0.0)

  Added: ja/reference/1.1.0/catalog/version1/index.md (+303 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/catalog/version1/index.md    2014-11-30 23:20:40 +0900 (1c2b04c)
@@ -0,0 +1,303 @@
+---
+title: カタログ
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/catalog/version1/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+複数のリソースが集まり、Droongaネットワークを構成します。それらのリソースを **カタログ** に記述しなければいけません。ネットワーク上のすべてのノードは同じカタログを共有します。
+
+このドキュメントはカタログについて説明します。
+
+ * TOC
+{:toc}
+
+## 管理方法
+
+今のところ、カタログを書くことも書いたカタログをすべてのノードで共有することも手動で行う必要があります。
+
+近い将来、カタログを生成するユーティリティープログラムを提供する予定です。さらにその後、Droongaネットワークは自動でカタログの管理やカタログの配布を行うようになる予定です。
+
+## 用語集
+
+このセクションではカタログに出てくる用語を説明します。
+
+### カタログ
+
+カタログはネットワーク内のリソースを表現するデータの集まりです。
+
+### ゾーン
+
+ゾーンはファームの集まりです。同じゾーン内のファームはお互いに近くに配置することが期待されています。例えば、同じホスト内、同じスイッチ内、同じネットワーク内といった具合です。
+
+### ファーム
+
+ファームはDroongaエンジンのインスタンスです。Droongaエンジンは[Fluentd][]のプラグインfluent-plugin-droongaとして実装されています。
+
+1つの `fluentd` プロセスは複数のDroongaエンジンを持てます。 `fluentd.conf` に `droonga` タイプの `match` エントリーを1つ以上追加すると、 `fluentd` プロセスは同じ数のDroongaエンジンのインスタンスを生成します。
+
+ファームは複数のワーカーと1つのジョブキューを持ちます。ファームはリクエストをジョブキューに投入します。ワーカーはジョブキューからリクエストを取り出します。
+
+### データセット
+
+データセットは論理テーブルの集まりです。論理テーブルは1つのデータセットに所属しなければいけません。
+
+各データセットの名前は同じDroongaネットワーク内で重複してはいけません。
+
+### 論理テーブル
+
+論理テーブルはパーティションされた1つ以上の物理テーブルで構成されます。論理テーブルは物理レコードを持ちません。物理テーブルから物理レコードを返します。
+
+1つの論理テーブルを1つ以上の物理テーブルにどうやってパーティションするかをカスタマイズできます。例えば、パーティションキーやパーテション数をカスタマイズできます。
+
+### 物理テーブル
+
+物理テーブルはGroongaデータベースのテーブルです。Groongaのテーブルに物理レコードを保存します。
+
+### リング
+
+リングはパーティションセットの集まりです。データセットは必ず1つのリングを持ちます。データセットはリング上に論理テーブルを作ります。
+
+Droongaエンジンは物理テーブル上にあるレコードを1つ以上のパーティションセットに複製します。
+
+### パーティションセット
+
+パーティションセットはパーティションの集まりです。1つのパーティションセットは同じDroongaネットワーク内のすべての論理テーブルのすべてのレコードを保存します。言い換えると、データセットは1つのパーティションセットの中でパーティションされます。
+
+1つのパーティションセットは他のパーティションセットの複製です。
+
+将来、Droongaエンジンは1つ以上のパーティションセット内でのパーティションをサポートするかもしれません。古いデータと新しいデータで異なるパーティションサイズを使うことができるので便利でしょう。通常、古いデータは小さく、新しいデータは大きくなります。大きなデータに大きなパーティションサイズを使うのは妥当なことです。
+
+### パーティション
+
+1つのパーティションは1つのGroongaデータベースに対応します。0個以上の物理テーブルを持ちます。
+
+### プラグイン
+
+Droonga Engine can be extended by writing plugin scripts.
+In most cases, a series of plugins work cooperatively to
+achieve required behaviors.
+So, plugins are organized by behaviors.
+Each behavior can be attached to datasets and/or tables by
+adding "plugins" section to the corresponding entry in the catalog.
+
+More than one plugin can be assigned in a "plugins" section as an array.
+The order in the array controls the execution order of plugins
+when adapting messages.
+When adapting an incoming message, plugins are applied in forward order
+whereas those are applied in reverse order when adapting an outgoing message.
+
+## 例
+
+Consider the following case:
+
+ * There are two farms.
+ * All farms (Droonga Engine instances) work on the same fluentd.
+ * Each farm has two partitions.
+ * There are two replicas.
+ * There are two partitions for each table.
+
+The catalog is written as a JSON file. Its file name is `catalog.json`.
+
+Here is a `catalog.json` for the above case:
+
+~~~json
+{
+  "version": 1,
+  "effective_date": "2013-06-05T00:05:51Z",
+  "zones": ["localhost:23003/farm0", "localhost:23003/farm1"],
+  "farms": {
+    "localhost:23003/farm0": {
+      "device": "disk0",
+      "capacity": 1024
+    },
+    "localhost:23003/farm1": {
+      "device": "disk1",
+      "capacity": 1024
+    }
+  },
+  "datasets": {
+    "Wiki": {
+      "workers": 4,
+      "plugins": ["groonga", "crud", "search"],
+      "number_of_replicas": 2,
+      "number_of_partitions": 2,
+      "partition_key": "_key",
+      "date_range": "infinity",
+      "ring": {
+        "localhost:23004": {
+          "weight": 10,
+          "partitions": {
+            "2013-07-24": [
+              "localhost:23003/farm0.000",
+              "localhost:23003/farm1.000"
+            ]
+          }
+        },
+        "localhost:23005": {
+          "weight": 10,
+          "partitions": {
+            "2013-07-24": [
+              "localhost:23003/farm1.001",
+              "localhost:23003/farm0.001"
+            ]
+          }
+        }
+      }
+    }
+  }
+}
+~~~
+
+## Parameters
+
+Here are descriptions about parameters in `catalog.json`.
+
+### `version` {#version}
+
+It is a format version of the catalog file.
+
+Droonga Engine will change `catalog.json` format in the
+future. Droonga Engine can provide auto format update feature with the
+information.
+
+The value must be `1`.
+
+This is a required parameter.
+
+Example:
+
+~~~json
+{
+  "version": 1
+}
+~~~
+
+### `effective_date`
+
+It is a date string representing the day the catalog becomes
+effective.
+
+The date string format must be [W3C-DTF][].
+
+This is a required parameter.
+
+Note: fluent-plugin-droonga 0.8.0 doesn't use this value yet.
+
+Example:
+
+~~~json
+{
+  "effective_date": "2013-11-29T11:29:29Z"
+}
+~~~
+
+### `zones`
+
+`Zones` is an array to express proximities between farms.
+Farms are grouped by a zone, and zones can be grouped by another zone recursively.
+Zones make a single tree structure, expressed by nested arrays.
+Farms in a same branch are regarded as relatively closer than other farms.
+
+e.g.
+
+When the value of `zones` is as follows,
+
+~~~
+[["A", ["B", "C"]], "D"]
+~~~
+
+it expresses the following tree.
+
+       /\
+      /\ D
+     A /\
+      B  C
+
+This tree means that the farms "B" and "C" are closer to each other than to "A" or "D".
+You should make the elements of a branch in `zones` close to each other, for example
+in the same host, on the same switch, or in the same network.
+
+This is an optional parameter.
+
+Note: fluent-plugin-droonga 0.8.0 doesn't use this value yet.
+
+Example:
+
+~~~json
+{
+  "zones": [
+    ["localhost:23003/farm0",
+     "localhost:23003/farm1"],
+    ["localhost:23004/farm0",
+     "localhost:23004/farm1"]
+  ]
+}
+~~~
+
+*TODO: Discuss the role of this parameter. It seems completely equal to the list of keys of `farms`.*
+
+### `farms`
+
+It is an array of Droonga Engine instances.
+
+*TODO: Improve me. For example, we have to describe relations of nested farms, ex. `children`.*
+
+**Farms** correspond with fluent-plugin-droonga instances. A fluentd process may have multiple **farms** if more than one **match** entry with type **droonga** appear in the "fluentd.conf".
+Each **farm** has its own job queue.
+Each **farm** can attach to a data partition which is a part of a **dataset**.
+
+This is a required parameter.
+
+Example:
+
+~~~json
+{
+  "farms": {
+    "localhost:23003/farm0": {
+      "device": "/disk0",
+      "capacity": 1024
+    },
+    "localhost:23003/farm1": {
+      "device": "/disk1",
+      "capacity": 1024
+    }
+  }
+}
+~~~
+
+### `datasets`
+
+A **dataset** is a set of **tables** which comprise a single logical **table** virtually.
+Each **dataset** must have a unique name in the network.
+
+### `ring`
+
+`ring` is a series of partitions which comprise a dataset. `number_of_replicas`, `number_of_partitions` and **time-slice** factors affect the number of partitions in a `ring`.
+
+### `workers`
+
+`workers` is an integer number which specifies the number of worker processes to deal with the dataset.
+If `0` is specified, no worker is forked and all operations are done in the master process.
+
+### `number_of_partitions`
+
+`number_of_partitions` is an integer which represents the number of partitions divided by the hash function. The hash function, which determines in which partition of a dataset each record resides, is compatible with memcached.
+
+### `date_range`
+
+`date_range` determines when to split the dataset. If the string "infinity" is assigned, the dataset is never split by the time factor.
+
+### `number_of_replicas`
+
+`number_of_replicas` represents the number of replicas of the dataset maintained in the network.
+
+  [Fluentd]: http://fluentd.org/
+  [W3C-DTF]: http://www.w3.org/TR/NOTE-datetime "Date and Time Formats"

  Added: ja/reference/1.1.0/catalog/version2/index.md (+858 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/catalog/version2/index.md    2014-11-30 23:20:40 +0900 (c67b671)
@@ -0,0 +1,858 @@
+---
+title: カタログ
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/catalog/version2/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## 概要 {#abstract}
+
+`Catalog`はDroongaクラスタの設定を管理するためのJSONデータです。Droongaクラスタは1つ以上の`datasets`からなり、`dataset`はその他の部分からなります。それらは全て`catalog`に記述し、クラスタ内の全てホストで共有しなければなりません。
+
+## 使い方 {#usage}
+
+この [`version`](#paramter-version) の `catalog` は Droonga 1.0.0 以降で有効です。
+
+## 書式 {#syntax}
+
+    {
+      "version": <Version number>,
+      "effectiveDate": "<Effective date>",
+      "datasets": {
+        "<Name of the dataset 1>": {
+          "nWorkers": <Number of workers>,
+          "plugins": [
+            "Name of the plugin 1",
+            ...
+          ],
+          "schema": {
+            "<Name of the table 1>": {
+              "type"             : <"Array", "Hash", "PatriciaTrie" or "DoubleArrayTrie">
+              "keyType"          : "<Type of the primary key>",
+              "tokenizer"        : "<Tokenizer>",
+              "normalizer"       : "<Normalizer>",
+              "columns" : {
+                "<Name of the column 1>": {
+                  "type"         : <"Scalar", "Vector" or "Index">,
+                  "valueType"    : "<Type of the value>",
+                  "vectorOptions": {
+                    "weight"     : <Weight>,
+                  },
+                  "indexOptions" : {
+                    "section"    : <Section>,
+                    "weight"     : <Weight>,
+                    "position"   : <Position>,
+                    "sources"    : [
+                      "<Name of a column to be indexed>",
+                      ...
+                    ]
+                  }
+                },
+                "<Name of the column 2>": { ... },
+                ...
+              }
+            },
+            "<Name of the table 2>": { ... },
+            ...
+          },
+          "fact": "<Name of the fact table>",
+          "replicas": [
+            {
+              "dimension": "<Name of the dimension column>",
+              "slicer": "<Name of the slicer function>",
+              "slices": [
+                {
+                  "label": "<Label of the slice>",
+                  "volume": {
+                    "address": "<Address string of the volume>"
+                  }
+                },
+                ...
+              ]
+            },
+            ...
+          ]
+        },
+        "<Name of the dataset 2>": { ... },
+        ...
+      }
+    }
+
+## 詳細 {#details}
+
+### Catalog 定義 {#catalog}
+
+値
+: 以下のキーと値のペアを持つオブジェクト。
+
+#### パラメータ
+
+##### `version` {#parameter-version}
+
+概要
+: カタログファイルのバージョン番号。
+
+値
+: `2`. (このページに記述されている仕様はこの値が`2`のときのみ有効です)
+
+既定値
+: なし。これは必須のパラメータです。
+
+継承可能性
+: 不可。
+
+##### `effectiveDate` {#parameter-effective_date}
+
+概要
+: このカタログが有効になる時刻。
+
+値
+: [W3C-DTF](http://www.w3.org/TR/NOTE-datetime "Date and Time Formats") でフォーマットされたタイムゾーン付きの時刻。
+
+既定値
+: なし。これは必須のパラメータです。
+
+継承可能性
+: 不可。
+
+##### `datasets` {#parameter-datasets}
+
+概要
+: データセットの定義。
+
+値
+: データセット名をキーとし、[`dataset` 定義](#dataset) を値とするオブジェクト。
+
+既定値
+: なし。これは必須のパラメータです。
+
+継承可能性
+: 不可。
+
+##### `nWorkers` {#parameter-n_workers}
+
+概要
+: データベースインスタンス毎にspawnされるワーカの数。
+
+値
+: 整数。
+
+既定値
+: 0 (ワーカー無し。全ての処理がマスタープロセス内で行われます)
+
+継承可能性
+: 可。`dataset`と`volume`の定義でオーバライドできます。
+
+
+#### 例
+
+A version 2 catalog effective after `2013-09-01T00:00:00Z`, with no datasets:
+
+~~~
+{
+  "version": 2,
+  "effectiveDate": "2013-09-01T00:00:00Z",
+  "datasets": {
+  }
+}
+~~~
+
+### Dataset 定義 {#dataset}
+
+値
+: 以下のキーと値のペアを持つオブジェクト。
+
+#### パラメータ
+
+##### `plugins` {#parameter-plugins}
+
+概要
+: このデータセットにおいて有効にするプラグイン名文字列の配列。
+
+値
+: 文字列の配列。
+
+既定値
+: なし。これは必須のパラメータです。
+
+継承可能性
+: 可。`dataset`と`volume`の定義でオーバライドできます。
+
+##### `schema` {#parameter-schema}
+
+概要
+: テーブルとそのカラムの定義。
+
+値
+: テーブル名をキーとし、[`table` 定義](#table)を値とするオブジェクト。
+
+既定値
+: なし。これは必須のパラメータです。
+
+継承可能性
+: 可。`dataset`と`volume`の定義でオーバライドできます。
+
+##### `fact` {#parameter-fact}
+
+概要
+: fact テーブルの名前。`dataset`が複数の`slice`に格納される場合、[`schema`](#parameter-schema)パラメータで定義されたテーブルの中から、1つ[fact table](http://en.wikipedia.org/wiki/Fact_table)を選択する必要があります。
+
+値
+: 文字列。
+
+既定値
+: なし。
+
+継承可能性
+: 可。`dataset`と`volume`の定義でオーバライドできます。
+
+##### `replicas` {#parameter-replicas}
+
+概要
+: 互いに複製されるボリュームの集合。
+
+値
+: [`volume` 定義](#volume)の配列。
+
+既定値
+: なし。これは必須のパラメータです。
+
+継承可能性
+: 不可。
+
+#### 例
+
+データベースインスタンスに1つにつき4ワーカーを持ち、プラグイン`groonga`、`crud`、`search`を使用するデータセット:
+
+~~~
+{
+  "nWorkers": 4,
+  "plugins": ["groonga", "crud", "search"],
+  "schema": {
+  },
+  "replicas": [
+  ]
+}
+~~~
+
+### Table 定義 {#table}
+
+値
+: 以下のキーと値のペアを持つオブジェクト。
+
+#### パラメータ
+
+##### `type` {#parameter-table-type}
+
+概要
+: テーブルのキーを管理するためのデータ構造を指定する。
+
+値
+: 以下のうちいずれかの値。
+
+* `"Array"`: キーの無いテーブル
+* `"Hash"`: ハッシュテーブル
+* `"PatriciaTrie"`: パトリシアトライテーブル
+* `"DoubleArrayTrie"`: ダブル配列トライテーブル
+
+既定値
+: `"Hash"`
+
+継承可能性
+: 不可。
+
+##### `keyType` {#parameter-keyType}
+
+概要
+: テーブルにおけるキーのデータ型。`type`が`"Array"`の場合は指定してはいけません。
+
+値
+: 以下のデータ型のうちのいずれか。
+
+* `"Integer"`       : 64bit 符号付き整数。
+* `"Float"`         : 64bit 浮動小数点数。
+* `"Time"`          : マイクロ秒精度の時刻。
+* `"ShortText"`     : 4095バイトまでの文字列。
+* `"TokyoGeoPoint"` : 旧日本測地系による経緯度。
+* `"WGS84GeoPoint"` : [WGS84](http://en.wikipedia.org/wiki/World_Geodetic_System) による経緯度。
+
+既定値
+: なし。キーを持つテーブルでは指定が必須です。
+
+継承可能性
+: 不可。
+
+##### `tokenizer` {#parameter-tokenizer}
+
+概要
+: 語彙表として使われるテーブルにおける、文字列型の値を分割するために使うトークナイザーの種類を指定します。`keyType`が`"ShortText"`である場合にのみ有効です。
+
+値
+: 以下のトークナイザー名のうちのいずれか。
+
+* `"TokenDelimit"`
+* `"TokenUnigram"`
+* `"TokenBigram"`
+* `"TokenTrigram"`
+* `"TokenBigramSplitSymbol"`
+* `"TokenBigramSplitSymbolAlpha"`
+* `"TokenBigramSplitSymbolAlphaDigit"`
+* `"TokenBigramIgnoreBlank"`
+* `"TokenBigramIgnoreBlankSplitSymbol"`
+* `"TokenBigramIgnoreBlankSplitSymbolAlpha"`
+* `"TokenBigramIgnoreBlankSplitSymbolAlphaDigit"`
+* `"TokenDelimitNull"`
+
+既定値
+: なし。
+
+継承可能性
+: 不可。
+
+##### `normalizer` {#parameter-normalizer}
+
+概要
+: キーの値を正規化・制限するノーマライザーの種類を指定します。`keyType`が`"ShortText"`である場合にのみ有効です。
+
+値
+: 以下のノーマライザー名のうちのいずれか。
+
+* `"NormalizerAuto"`
+* `"NormalizerNFKC51"`
+
+既定値
+: なし。
+
+継承可能性
+: 不可。
+
+##### `columns` {#parameter-columns}
+
+概要
+: テーブルのカラムの定義。
+
+Value
+: An object keyed by the name of the column with value the [`column` definition](#column).
+
+既定値
+: なし。
+
+継承可能性
+: 不可。
+
+#### 例
+
+##### 例1: Hashテーブル
+
+`ShortText`型のキーを持つ`Hash`テーブルで、カラムは無いもの:
+
+~~~
+{
+  "type": "Hash",
+  "keyType": "ShortText",
+  "columns": {
+  }
+}
+~~~
+
+##### 例2: PatriciaTrieテーブル
+
+`TokenBigram`トークナイザと`NormalizerAuto`ノーマライザを利用する`PatriciaTrie`テーブル
+
+~~~
+{
+  "type": "PatriciaTrie",
+  "keyType": "ShortText",
+  "tokenizer": "TokenBigram",
+  "normalizer": "NormalizerAuto",
+  "columns": {
+  }
+}
+~~~
+
+### Column 定義 {#column}
+
+値
+
+: An object with the following key/value pairs.
+
+#### パラメータ
+
+##### `type` {#parameter-column-type}
+
+Abstract
+: Specifies the quantity of data stored as each column value.
+
+Value
+: Any of the followings.
+
+* `"Scalar"`: A single value.
+* `"Vector"`: A list of values.
+* `"Index"` : A set of unique values with additional properties respectively. Properties can be specified in [`indexOptions`](#parameter-indexOptions).
+
+Default value
+: `"Scalar"`
+
+継承可能性
+: 不可。
+
+##### `valueType` {#parameter-valueType}
+
+Abstract
+: Data type of the column value.
+
+Value
+: Any of the following data types or the name of another table defined in the same dataset. When a table name is assigned, the column acts as a foreign key references the table.
+
+* `"Bool"`          : `true` or `false`.
+* `"Integer"`       : 64bit signed integer.
+* `"Float"`         : 64bit floating-point number.
+* `"Time"`          : Time value with microseconds resolution.
+* `"ShortText"`     : Text value up to 4,095 bytes length.
+* `"Text"`          : Text value up to 2,147,483,647 bytes length.
+* `"TokyoGeoPoint"` : Tokyo Datum based geometric point value.
+* `"WGS84GeoPoint"` : [WGS84](http://en.wikipedia.org/wiki/World_Geodetic_System) based geometric point value.
+
+既定値
+: なし。これは必須のパラメータです。
+
+継承可能性
+: 不可。
+
+##### `vectorOptions` {#parameter-vectorOptions}
+
+概要
+: ベクターカラムの動作に関するオプションを指定します。
+
+Value
+: An object which is a [`vectorOptions` definition](#vectorOptions)
+
+Default value
+: `{}` (Void object).
+
+継承可能性
+: 不可。
+
+##### `indexOptions` {#parameter-indexOptions}
+
+概要
+: インデックスカラムの動作に関するオプションを指定します。
+
+Value
+: An object which is an [`indexOptions` definition](#indexOptions)
+
+Default value
+: `{}` (Void object).
+
+継承可能性
+: 不可。
+
+#### 例
+
+##### 例1: スカラー型カラム
+
+`ShortText`を格納するスカラー型のカラム:
+
+~~~
+{
+  "type": "Scalar",
+  "valueType": "ShortText"
+}
+~~~
+
+##### 例2: ベクター型カラム
+
+A vector column to store `ShortText` values with weight:
+
+~~~
+{
+  "type": "Scalar",
+  "valueType": "ShortText",
+  "vectorOptions": {
+    "weight": true
+  }
+}
+~~~
+
+##### 例3: インデックスカラム
+
+`Store`テーブルの`address`カラムをインデックスするカラム:
+
+~~~
+{
+  "type": "Index",
+  "valueType": "Store",
+  "indexOptions": {
+    "sources": [
+      "address"
+    ]
+  }
+}
+~~~
+
+### vectorOptions 定義 {#vectorOptions}
+
+値
+: 以下のキーと値のペアを持つオブジェクト。
+
+#### パラメータ
+
+##### `weight` {#parameter-vectorOptions-weight}
+
+Abstract
+: Specifies whether the vector column stores the weight data or not. Weight data is used for indicating the importance of the value.
+
+Value
+: A boolean value (`true` or `false`).
+
+Default value
+: `false`.
+
+継承可能性
+: 不可。
+
+#### 例
+
+Store the weight data.
+
+~~~
+{
+  "weight": true
+}
+~~~
+
+### indexOptions 定義 {#indexOptions}
+
+値
+: 以下のキーと値のペアを持つオブジェクト。
+
+#### パラメータ
+
+##### `section` {#parameter-indexOptions-section}
+
+Abstract
+: Specifies whether the index column stores the section data or not. Section data is typically used for distinguishing in which part of the sources the value appears.
+
+Value
+: A boolean value (`true` or `false`).
+
+Default value
+: `false`.
+
+継承可能性
+: 不可。
+
+##### `weight` {#parameter-indexOptions-weight}
+
+Abstract
+: Specifies whether the index column stores the weight data or not. Weight data is used for indicating the importance of the value in the sources.
+
+Value
+: A boolean value (`true` or `false`).
+
+Default value
+: `false`.
+
+継承可能性
+: 不可。
+
+##### `position` {#parameter-indexOptions-position}
+
+Abstract
+: Specifies whether the index column stores the position data or not. Position data is used for specifying the position where the value appears in the sources. It is indispensable for fast and accurate phrase-search.
+
+Value
+: A boolean value (`true` or `false`).
+
+Default value
+: `false`.
+
+継承可能性
+: 不可。
+
+##### `sources` {#parameter-indexOptions-sources}
+
+Abstract
+: Makes the column an inverted index of the referencing table's columns.
+
+Value
+: An array of column names of the referencing table assigned as [`valueType`](#parameter-valueType).
+
+既定値
+: なし。
+
+継承可能性
+: 不可。
+
+#### 例
+
+Store the section data, the weight data and the position data.
+Index `name` and `address` on the referencing table.
+
+~~~
+{
+  "section": true,
+  "weight": true,
+  "position": true
+  "sources": [
+    "name",
+    "address"
+  ]
+}
+~~~
+
+### Volume 定義 {#volume}
+
+概要
+: データセットを構成する単位。データセットは1つ、もしくは複数のボリュームからなります。ボリュームは単一のデータベースインスタンスか、`slices` の集合で構成されます。ボリュームが単一のデータベースインスタンスから構成される場合は、`address`パラメータを指定しなければなりません。このとき、それ以外のパラメータを指定してはいけません。そうでない場合は、`dimension`と`slicer`と`slices`が必須で、他は指定してはいけません。
+
+値
+: 以下のキーと値のペアを持つオブジェクト。
+
+#### パラメータ
+
+##### `address` {#parameter-address}
+
+概要
+: データベースインスタンスの場所を指定します。
+
+値
+: 以下の書式の文字列。
+
+      ${host_name}:${port_number}/${tag}.${name}
+
+  * `host_name`: データベースのインスタンスを保持するホストの名前。
+  * `port_number`: データベースのインスタンスのためのポート番号。
+  * `tag`: データベースのインスタンスのタグ名。タグ名には`.`を含めることはできません。ホスト名とポート番号のペアごとに、複数のタグを使うことができます。
+  * `name`: データベースのインスタンスの名前。あるホスト名・ポート番号・タグ名の3つの組み合わせごとに、複数のインスタンス名を使うことができます。
+
+既定値
+: なし。
+
+継承可能性
+: 不可。
+
+##### `dimension` {#parameter-dimension}
+
+概要
+: fact表の中でレコードをスライスする次元を指定します。fact表の'_key'または[`columns`](#parameter-columns)からスカラー型のカラムを選択します。[dimension](http://en.wikipedia.org/wiki/Dimension_%28data_warehouse%29)を参照してください。
+
+値
+: 文字列。
+
+既定値
+: `"_key"`
+
+継承可能性
+: 可。`dataset`と`volume`の定義でオーバライドできます。
+
+##### `slicer` {#parameter-slicer}
+
+概要
+: dimensionカラムをsliceする関数。
+
+値
+: スライサー関数の名前。
+
+既定値
+: `"hash"`
+
+継承可能性
+: 可。`dataset`と`volume`の定義でオーバライドできます。
+
+`slices`の集合からなるボリュームを定義するためには、レコードを複数のスライスに振り分けるための方法を決める必要があります。
+
+`slice`で指定されたスライサー関数と、スライサー関数への入力として与えられる`dimension`で指定されたカラム(またはキー)によって、それが決まります。
+
+スライサーは以下の3種類に分けられます:
+
+比例尺度
+: *比例尺度スライサー*は、個々のデータを指定された比率で、_keyのハッシュ値などに基づいて振り分けます。
+  この種類のスライサー:
+  
+  * `hash`
+
+順序尺度
+: *順序尺度スライサー*は、個々のデータを順序のある値(時間、整数、`{High, Middle, Low}`など)に基づいて振り分けます。
+  この種類のスライサー:
+  
+  * (未実装)
+
+名義尺度
+: *名義尺度スライサー*は、個々のデータをカテゴリを示す名義(国名、郵便番号、色など)で振り分けます。
+  この種類のスライサー:
+  
+  * (未実装)
+
+##### `slices` {#parameter-slices}
+
+概要
+: データを格納するスライスの定義。
+
+値
+: [`slice` 定義](#slice)の配列。
+
+既定値
+: なし。
+
+継承可能性
+: 不可。
+
+#### 例
+
+##### 例1: 単一のインスタンス
+
+"localhost:24224/volume.000"にあるボリューム:
+
+~~~
+{
+  "address": "localhost:24224/volume.000"
+}
+~~~
+
+##### Example 2: 複数のスライス
+
+3つのスライスから構成され、`_key`に対してratio-scaledなスライサー関数`hash`を適用してレコードを分散させるボリューム
+
+~~~
+{
+  "dimension": "_key",
+  "slicer": "hash",
+  "slices": [
+    {
+      "volume": {
+        "address": "localhost:24224/volume.000"
+      }
+    },
+    {
+      "volume": {
+        "address": "localhost:24224/volume.001"
+      }
+    },
+    {
+      "volume": {
+        "address": "localhost:24224/volume.002"
+      }
+    }
+  ]
+}
+~~~
+
+### Slice 定義 {#slice}
+
+概要
+: スライスの定義。スライスされたデータの範囲と、それを保存するボリュームを指定する。
+
+値
+: 以下のキーと値のペアを持つオブジェクト。
+
+#### パラメータ
+
+##### `weight` {#parameter-slice-weight}
+
+概要
+: スライス内での割り当て量を指定します。`slicer`がratio-scaledの場合のみ有効。
+
+値
+: 数値。
+
+既定値
+: `1`
+
+継承可能性
+: 不可。
+
+##### `label` {#parameter-label}
+
+概要
+: slicer が返す具体的な値。 slicerがnominal-scaledの場合のみ有効。
+
+Value
+: A value of the dimension column data type. When the value is not provided, this slice is regarded as *else*; matched only if all other labels are not matched. Therefore, only one slice without `label` is allowed in slices.
+
+既定値
+: なし。
+
+継承可能性
+: 不可。
+
+##### `boundary` {#parameter-boundary}
+
+概要
+: `slicer`の返す値と比較可能な具体的な値。`slicer`がordinal-scaledの場合のみ有効。
+
+Value
+: A value of the dimension column data type. When the value is not provided, this slice is regarded as *else*; this means the slice is open-ended. Therefore, only one slice without `boundary` is allowed in a slices.
+
+既定値
+: なし。
+
+継承可能性
+: 不可。
+
+##### `volume` {#parameter-volume}
+
+概要
+: スライスに対応するデータを格納するボリューム。
+
+値
+
+: [`volume` 定義](#volume)オブジェクト
+
+既定値
+: なし。
+
+継承可能性
+: 不可。
+
+#### 例
+
+##### 例1: Ratio-scaled
+
+ratio-scaledなスライサーのためのスライス、重みは`1`
+
+~~~
+{
+  "weight": 1,
+  "volume": {
+  }
+}
+~~~
+
+##### 例2: Nominal-scaled
+
+nominal-scaledなスライサーのためのスライス、ラベルは `"1"`
+
+~~~
+{
+  "label": "1",
+  "volume": {
+  }
+}
+~~~
+
+##### 例3: Ordinal-scaled
+
+ordinal-scaledなスライサーに対するスライス、境界値は`100`:
+
+~~~
+{
+  "boundary": 100,
+  "volume": {
+  }
+}
+~~~
+
+## 実際の例
+
+[基本的な使い方のチュートリアル][basic tutorial]に登場するカタログを参照してください。
+
+  [basic tutorial]: ../../../tutorial/basic

  Added: ja/reference/1.1.0/commands/add/index.md (+262 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/commands/add/index.md    2014-11-30 23:20:40 +0900 (ff14783)
@@ -0,0 +1,262 @@
+---
+title: add
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/commands/add/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## 概要 {#abstract}
+
+`add` は、テーブルにレコードを登録します。対象のテーブルが主キーを持っており、同じキーのレコードが既に存在している場合には、既存レコードのカラムの値を更新します。
+
+## APIの形式 {#api-types}
+
+### HTTP {#api-types-http}
+
+リクエスト先
+: `(ドキュメントルート)/droonga/add`
+
+リクエストメソッド
+: `POST`
+
+リクエストのURLパラメータ
+: なし。
+
+リクエストのbody
+: [パラメータ](#parameters)のハッシュ。
+
+レスポンスのbody
+: [レスポンスメッセージ](#response)。
+
+### REST {#api-types-rest}
+
+対応していません。
+
+### Fluentd {#api-types-fluentd}
+
+形式
+: Request-Response型。コマンドに対しては必ず対応するレスポンスが返されます。
+
+リクエストの `type`
+: `add`
+
+リクエストの `body`
+: [パラメータ](#parameters)のハッシュ。
+
+レスポンスの `type`
+: `add.result`
+
+## パラメータの構文 {#syntax}
+
+対象のテーブルが主キーを持つ場合:
+
+    {
+      "table"  : "<テーブル名>",
+      "key"    : "<レコードの主キー>",
+      "values" : {
+        "<カラム1の名前>" : <値1>,
+        "<カラム2の名前>" : <値2>,
+        ...
+      }
+    }
+
+対象のテーブルが主キーを持たない場合:
+
+    {
+      "table"  : "<テーブル名>",
+      "values" : {
+        "<カラム1の名前>" : <値1>,
+        "<カラム2の名前>" : <値2>,
+        ...
+      }
+    }
+
+## 使い方 {#usage}
+
+本項の説明では以下のような2つのテーブルが存在している事を前提として、典型的な使い方を通じて `add` コマンドの使い方を説明します。
+
+Personテーブル(主キー無し):
+
+|name|job (Jobテーブルを参照)|
+|----|----------------------|
+|Alice Arnold|announcer|
+|Alice Cooper|musician|
+
+Jobテーブル(主キー有り):
+
+|_key|label|
+|----|-----|
+|announcer|announcer|
+|musician|musician|
+
+
+### 主キーを持たないテーブルにレコードを追加する {#adding-record-to-table-without-key}
+
+主キーを持たないテーブルにレコードを追加する場合は、 `key` を指定せずに `table` と `values` だけを指定します。
+
+    {
+      "type" : "add",
+      "body" : {
+        "table"  : "Person",
+        "values" : {
+          "name" : "Bob Dylan",
+          "job"  : "musician"
+        }
+      }
+    }
+    
+    => {
+         "type" : "add.result",
+         "body" : true
+       }
+
+`add` は再帰的に動作します。別のテーブルを参照しているカラムについて、参照先のテーブルに存在しない値を指定した場合、エラーにはならず、参照先のテーブルにも同時に新しいレコードが追加されます。例えば、以下は Job テーブルに存在しない主キー `doctor` を伴って Person テーブルにレコードを追加します。
+
+    {
+      "type" : "add",
+      "body" : {
+        "table"  : "Person",
+        "values" : {
+          "name" : "Alice Miller",
+          "job"  : "doctor"
+        }
+      }
+    }
+    
+    => {
+         "type" : "add.result",
+         "body" : true
+       }
+
+この時、Jobテーブルには主キーだけが指定された新しいレコードが自動的に追加されます。
+
+|_key|label|
+|----|-----|
+|announcer|announcer|
+|musician|musician|
+|doctor|(空文字)|
+
+
+### 主キーを持つテーブルにレコードを追加する {#adding-record-to-table-with-key}
+
+主キーを持つテーブルにレコードを追加する場合は、 `table`、`key`、`values` のすべてを指定します。
+
+    {
+      "type" : "add",
+      "body" : {
+        "table"  : "Job",
+        "key"    : "writer",
+        "values" : {
+          "label" : "writer"
+        }
+      }
+    }
+    
+    => {
+         "type" : "add.result",
+         "body" : true
+       }
+
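+なお、HTTP API経由の場合、上記と同じ操作は概ね以下のようなリクエストで実現できます(ホスト名 `localhost` とポート番号 `10041` は説明のための一例で、実際の値は droonga-http-server の設定に依存します):
+
+    $ curl -X POST "http://localhost:10041/droonga/add" \
+           -H "Content-Type: application/json" \
+           --data '{"table": "Job", "key": "writer", "values": {"label": "writer"}}'
+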
+### 既存レコードのカラムの値を更新する {#updating}
+
+主キーを持つテーブルに対する、既存レコードの主キーを伴う `add` コマンドは、既存レコードのカラムの値の更新操作と見なされます。
+
+    {
+      "type" : "add",
+      "body" : {
+        "table"  : "Job",
+        "key"    : "doctor",
+        "values" : {
+          "label" : "doctor"
+        }
+      }
+    }
+    
+    => {
+         "type" : "add.result",
+         "body" : true
+       }
+
+
+主キーを持たないテーブルのレコードに対しては、値の更新操作はできません(常にレコードの追加と見なされます)。
+
+
+## パラメータの詳細 {#parameters}
+
+### `table` {#parameter-table}
+
+概要
+: レコードを登録するテーブルの名前を指定します。
+
+値
+: テーブル名の文字列。
+
+省略時の既定値
+: なし。このパラメータは必須です。
+
+### `key` {#parameter-key}
+
+概要
+: レコードの主キーを指定します。
+
+値
+: 主キーとなる文字列。
+
+省略時の初期値
+: なし。対象のテーブルが主キーを持つ場合、このパラメータは必須です。主キーがない場合、このパラメータは無視されます。
+
+既に同じ主キーを持つレコードが存在している場合は、レコードの各カラムの値を更新します。
+
+対象のテーブルが主キーを持たない場合は、指定しても単に無視されます。
+
+### `values` {#parameter-values}
+
+概要
+: レコードの各カラムの値を指定します。
+
+値
+: カラム名をキー、カラムの値を値としたハッシュ。
+
+省略時の初期値
+: `null`
+
+指定されなかったカラムの値は登録・更新されません。
+
+
+## レスポンス {#response}
+
+このコマンドは、レコードを正常に追加または更新できた場合、真偽値 `true` を `body`、`200` を `statusCode` としたレスポンスを返します。以下はレスポンスの `body` の例です。
+
+    true
+
+## エラーの種類 {#errors}
+
+このコマンドは[一般的なエラー](/ja/reference/message/#error)に加えて、以下のエラーを場合に応じて返します。
+
+### `MissingTableParameter`
+
+`table` パラメータの指定を忘れていることを示します。ステータスコードは `400` です。
+
+### `MissingPrimaryKeyParameter`
+
+主キーが存在するテーブルに対して、`key` パラメータの指定を忘れていることを示します。ステータスコードは `400` です。
+
+### `InvalidValue`
+
+カラムに設定しようとした値が不正である(例:位置情報型や整数型のカラムに通常の文字列を指定した、など)事を示します。ステータスコードは `400` です。
+
+### `UnknownTable`
+
+指定されたデータセット内に、指定されたテーブルが存在していない事を示します。ステータスコードは `404` です。
+
+### `UnknownColumn`
+
+指定されたカラムがテーブルに存在しない未知のカラムである事を示します。ステータスコードは `400` です。
+

  Added: ja/reference/1.1.0/commands/column-create/index.md (+110 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/commands/column-create/index.md    2014-11-30 23:20:40 +0900 (32df074)
@@ -0,0 +1,110 @@
+---
+title: column_create
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/commands/column-create/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## 概要 {#abstract}
+
+`column_create` は、指定したテーブルに新しいカラムを作成します。
+
+このコマンドは[Groonga の `column_create` コマンド](http://groonga.org/ja/docs/reference/commands/column_create.html)と互換性があります。
+
+## APIの形式 {#api-types}
+
+### HTTP {#api-types-http}
+
+リクエスト先
+: `(ドキュメントルート)/d/column_create`
+
+リクエストメソッド
+: `GET`
+
+リクエストのURLパラメータ
+: [パラメータの一覧](#parameters)で定義されている物を指定します。
+
+リクエストのbody
+: なし。
+
+レスポンスのbody
+: [レスポンスメッセージ](#response)。
+
+### REST {#api-types-rest}
+
+対応していません。
+
+### Fluentd {#api-types-fluentd}
+
+形式
+: Request-Response型。コマンドに対しては必ず対応するレスポンスが返されます。
+
+リクエストの `type`
+: `column_create`
+
+リクエストの `body`
+: [パラメータ](#parameters)のハッシュ。
+
+レスポンスの `type`
+: `column_create.result`
+
+## パラメータの構文 {#syntax}
+
+    {
+      "table"  : "<テーブル名>",
+      "name"   : "<カラム名>",
+      "flags"  : "<カラムの属性>",
+      "type"   : "<値の型>",
+      "source" : "<インデックス対象のカラム名>"
+    }
+
+## パラメータの詳細 {#parameters}
+
+`table`, `name` 以外のパラメータはすべて省略可能です。
+
+すべてのパラメータは[Groonga の `column_create` コマンドの引数](http://groonga.org/ja/docs/reference/commands/column_create.html#parameters)と共通です。詳細はGroongaのコマンドリファレンスを参照して下さい。
+
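+例えば、HTTP経由では概ね以下のようなリクエストになります(ホスト名・ポート番号・テーブル名・カラム名はいずれも説明用の例です):
+
+    $ curl "http://localhost:10041/d/column_create?table=Person&name=nickname&flags=COLUMN_SCALAR&type=ShortText"
+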
+## レスポンス {#response}
+
+このコマンドは、レスポンスの `body` としてコマンドの実行結果に関する情報を格納した配列を返却します。
+
+    [
+      [
+        <Groongaのステータスコード>,
+        <開始時刻>,
+        <処理に要した時間>
+      ],
+      <カラムが作成されたかどうか>
+    ]
+
+このコマンドはレスポンスの `statusCode` として常に `200` を返します。これは、Groonga互換コマンドのエラー情報はGroongaのそれと同じ形で処理される必要があるためです。
+
+レスポンスの `body` の詳細:
+
+ステータスコード
+: コマンドが正常に受け付けられたかどうかを示す整数値です。以下のいずれかの値をとります。
+  
+   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : 正常に処理された。
+   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : 引数が不正である。
+
+開始時刻
+: 処理を開始した時刻を示す数値(UNIX秒)。
+
+処理に要した時間
+: 処理を開始してから完了までの間にかかった時間を示す数値。
+
+カラムが作成されたかどうか
+: カラムが作成されたかどうかを示す真偽値です。以下のいずれかの値をとります。
+  
+   * `true`:カラムを作成した。
+   * `false`:カラムを作成しなかった。

  Added: ja/reference/1.1.0/commands/column-list/index.md (+103 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/commands/column-list/index.md    2014-11-30 23:20:40 +0900 (c0db642)
@@ -0,0 +1,103 @@
+---
+title: column_list
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/commands/column-list/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## 概要 {#abstract}
+
+The `column_list` command reports the list of all existing columns in a table.
+
+This is compatible with [the `column_list` command of Groonga](http://groonga.org/docs/reference/commands/column_list.html).
+
+## APIの形式 {#api-types}
+
+### HTTP {#api-types-http}
+
+リクエスト先
+: `(ドキュメントルート)/d/column_list`
+
+リクエストメソッド
+: `GET`
+
+リクエストのURLパラメータ
+: [パラメータの一覧](#parameters)で定義されている物を指定します。
+
+リクエストのbody
+: なし。
+
+レスポンスのbody
+: [レスポンスメッセージ](#response)。
+
+### REST {#api-types-rest}
+
+対応していません。
+
+### Fluentd {#api-types-fluentd}
+
+形式
+: Request-Response型。コマンドに対しては必ず対応するレスポンスが返されます。
+
+リクエストの `type`
+: `column_list`
+
+リクエストの `body`
+: [パラメータ](#parameters)のハッシュ。
+
+レスポンスの `type`
+: `column_list.result`
+
+## パラメータの構文 {#syntax}
+
+    {
+      "table" : "<Name of the table>"
+    }
+
+## パラメータの詳細 {#parameters}
+
+The only parameter, `table`, is required.
+
+It is compatible with [the parameters of the `column_list` command of Groonga](http://groonga.org/docs/reference/commands/column_list.html#parameters). See the linked document for more details.
+
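+例えば、HTTP経由では概ね以下のようなリクエストになります(ホスト名・ポート番号・テーブル名は説明用の例です):
+
+    $ curl "http://localhost:10041/d/column_list?table=Person"
+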
+## レスポンス {#response}
+
+このコマンドは、レスポンスの `body` としてコマンドの実行結果に関する情報を格納した配列を返却します。
+
+    [
+      [
+        <Groonga's status code>,
+        <Start time>,
+        <Elapsed time>
+      ],
+      <List of columns>
+    ]
+
+The structure of the returned array is compatible with [the returned value of Groonga's `column_list` command](http://groonga.org/docs/reference/commands/column_list.html#return-value). See the linked document for more details.
+
+このコマンドはレスポンスの `statusCode` として常に `200` を返します。これは、Groonga互換コマンドのエラー情報はGroongaのそれと同じ形で処理される必要があるためです。
+
+レスポンスの `body` の詳細:
+
+ステータスコード
+: コマンドが正常に受け付けられたかどうかを示す整数値です。以下のいずれかの値をとります。
+  
+   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : 正常に処理された。
+   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : 引数が不正である。
+
+開始時刻
+: 処理を開始した時刻を示す数値(UNIX秒)。
+
+処理に要した時間
+: 処理を開始してから完了までの間にかかった時間を示す数値。
+
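+参考として、Fluentd形式でのリクエストの一例を示します。テーブル名は説明のための仮の値です。レスポンスの `body` の2番目の要素には、Groongaの `column_list` コマンドと同じ形式でカラムの一覧が格納されます。
+
+    {
+      "type" : "column_list",
+      "body" : {
+        "table" : "Person"
+      }
+    }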

  Added: ja/reference/1.1.0/commands/column-remove/index.md (+107 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/commands/column-remove/index.md    2014-11-30 23:20:40 +0900 (2597f18)
@@ -0,0 +1,107 @@
+---
+title: column_remove
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/commands/column-remove/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## 概要 {#abstract}
+
+The `column_remove` command removes an existing column from a table.
+
+This is compatible with [the `column_remove` command of Groonga](http://groonga.org/docs/reference/commands/column_remove.html).
+
+## APIの形式 {#api-types}
+
+### HTTP {#api-types-http}
+
+リクエスト先
+: `(ドキュメントルート)/d/column_remove`
+
+リクエストメソッド
+: `GET`
+
+リクエストのURLパラメータ
+: [パラメータの一覧](#parameters)で定義されている物を指定します。
+
+リクエストのbody
+: なし。
+
+レスポンスのbody
+: [レスポンスメッセージ](#response)。
+
+### REST {#api-types-rest}
+
+対応していません。
+
+### Fluentd {#api-types-fluentd}
+
+形式
+: Request-Response型。コマンドに対しては必ず対応するレスポンスが返されます。
+
+リクエストの `type`
+: `column_remove`
+
+リクエストの `body`
+: [パラメータ](#parameters)のハッシュ。
+
+レスポンスの `type`
+: `column_remove.result`
+
+## パラメータの構文 {#syntax}
+
+    {
+      "table" : "<Name of the table>",
+      "name"  : "<Name of the column>"
+    }
+
+## パラメータの詳細 {#parameters}
+
+All parameters are required.
+
+They are compatible with [the parameters of the `column_remove` command of Groonga](http://groonga.org/docs/reference/commands/column_remove.html#parameters). See the linked document for more details.
+
+## レスポンス {#response}
+
+このコマンドは、レスポンスの `body` としてコマンドの実行結果に関する情報を格納した配列を返却します。
+
+    [
+      [
+        <Groonga's status code>,
+        <Start time>,
+        <Elapsed time>
+      ],
+      <Column is successfully removed or not>
+    ]
+
+このコマンドはレスポンスの `statusCode` として常に `200` を返します。これは、Groonga互換コマンドのエラー情報はGroongaのそれと同じ形で処理される必要があるためです。
+
+レスポンスの `body` の詳細:
+
+ステータスコード
+: コマンドが正常に受け付けられたかどうかを示す整数値です。以下のいずれかの値をとります。
+  
+   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : 正常に処理された。
+   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : 引数が不正である。
+
+開始時刻
+: 処理を開始した時刻を示す数値(UNIX秒)。
+
+処理に要した時間
+: 処理を開始してから完了までの間にかかった時間を示す数値。
+
+Column is successfully removed or not
+: A boolean value indicating whether the column was successfully removed. Possible values are:
+  
+   * `true`: The column was successfully removed.
+   * `false`: The column was not removed.

  Added: ja/reference/1.1.0/commands/column-rename/index.md (+108 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/commands/column-rename/index.md    2014-11-30 23:20:40 +0900 (74ca07f)
@@ -0,0 +1,108 @@
+---
+title: column_rename
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/commands/column-rename/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## 概要 {#abstract}
+
+The `column_rename` command renames an existing column in a table.
+
+This is compatible with [the `column_rename` command of Groonga](http://groonga.org/docs/reference/commands/column_rename.html).
+
+## APIの形式 {#api-types}
+
+### HTTP {#api-types-http}
+
+リクエスト先
+: `(ドキュメントルート)/d/column_rename`
+
+リクエストメソッド
+: `GET`
+
+リクエストのURLパラメータ
+: [パラメータの一覧](#parameters)で定義されている物を指定します。
+
+リクエストのbody
+: なし。
+
+レスポンスのbody
+: [レスポンスメッセージ](#response)。
+
+### REST {#api-types-rest}
+
+対応していません。
+
+### Fluentd {#api-types-fluentd}
+
+形式
+: Request-Response型。コマンドに対しては必ず対応するレスポンスが返されます。
+
+リクエストの `type`
+: `column_rename`
+
+リクエストの `body`
+: [パラメータ](#parameters)のハッシュ。
+
+レスポンスの `type`
+: `column_rename.result`
+
+## パラメータの構文 {#syntax}
+
+    {
+      "table"    : "<Name of the table>",
+      "name"     : "<Current name of the column>",
+      "new_name" : "<New name of the column>"
+    }
+
+## パラメータの詳細 {#parameters}
+
+All parameters are required.
+
+They are compatible with [the parameters of the `column_rename` command of Groonga](http://groonga.org/docs/reference/commands/column_rename.html#parameters). See the linked document for more details.
+
+## レスポンス {#response}
+
+このコマンドは、レスポンスの `body` としてコマンドの実行結果に関する情報を格納した配列を返却します。
+
+    [
+      [
+        <Groonga's status code>,
+        <Start time>,
+        <Elapsed time>
+      ],
+      <Column is successfully renamed or not>
+    ]
+
+このコマンドはレスポンスの `statusCode` として常に `200` を返します。これは、Groonga互換コマンドのエラー情報はGroongaのそれと同じ形で処理される必要があるためです。
+
+レスポンスの `body` の詳細:
+
+ステータスコード
+: コマンドが正常に受け付けられたかどうかを示す整数値です。以下のいずれかの値をとります。
+  
+   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : 正常に処理された。
+   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : 引数が不正である。
+
+開始時刻
+: 処理を開始した時刻を示す数値(UNIX秒)。
+
+処理に要した時間
+: 処理を開始してから完了までの間にかかった時間を示す数値。
+
+Column is successfully renamed or not
+: A boolean value indicating whether the column was successfully renamed. Possible values are:
+  
+   * `true`: The column was successfully renamed.
+   * `false`: The column was not renamed.

  Added: ja/reference/1.1.0/commands/delete/index.md (+122 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/commands/delete/index.md    2014-11-30 23:20:40 +0900 (91f62ce)
@@ -0,0 +1,122 @@
+---
+title: delete
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/commands/delete/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## 概要 {#abstract}
+
+The `delete` command removes records from a table.
+
+This is compatible with [the `delete` command of Groonga](http://groonga.org/docs/reference/commands/delete.html).
+
+## APIの形式 {#api-types}
+
+### HTTP {#api-types-http}
+
+リクエスト先
+: `(ドキュメントルート)/d/delete`
+
+リクエストメソッド
+: `GET`
+
+リクエストのURLパラメータ
+: [パラメータの一覧](#parameters)で定義されている物を指定します。
+
+リクエストのbody
+: なし。
+
+レスポンスのbody
+: [レスポンスメッセージ](#response)。
+
+### REST {#api-types-rest}
+
+対応していません。
+
+### Fluentd {#api-types-fluentd}
+
+形式
+: Request-Response型。コマンドに対しては必ず対応するレスポンスが返されます。
+
+リクエストの `type`
+: `delete`
+
+リクエストの `body`
+: [パラメータ](#parameters)のハッシュ。
+
+レスポンスの `type`
+: `delete.result`
+
+## パラメータの構文 {#syntax}
+
+    {
+      "table" : "<Name of the table>",
+      "key"   : "<Key of the record>"
+    }
+
+または
+
+    {
+      "table" : "<Name of the table>",
+      "id"    : "<ID of the record>"
+    }
+
+または
+
+    {
+      "table"  : "<Name of the table>",
+      "filter" : "<Complex search conditions>"
+    }
+
+## パラメータの詳細 {#parameters}
+
+All parameters except `table` are optional.
+However, you must specify one of `key`, `id`, or `filter` to identify the record or records to be removed.
+
+They are compatible with [the parameters of the `delete` command of Groonga](http://groonga.org/docs/reference/commands/delete.html#parameters). See the linked document for more details.
+
+## レスポンス {#response}
+
+このコマンドは、レスポンスの `body` としてコマンドの実行結果に関する情報を格納した配列を返却します。
+
+    [
+      [
+        <Groonga's status code>,
+        <Start time>,
+        <Elapsed time>
+      ],
+      <Records are successfully removed or not>
+    ]
+
+このコマンドはレスポンスの `statusCode` として常に `200` を返します。これは、Groonga互換コマンドのエラー情報はGroongaのそれと同じ形で処理される必要があるためです。
+
+レスポンスの `body` の詳細:
+
+ステータスコード
+: コマンドが正常に受け付けられたかどうかを示す整数値です。以下のいずれかの値をとります。
+  
+   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : 正常に処理された。
+   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : 引数が不正である。
+
+開始時刻
+: 処理を開始した時刻を示す数値(UNIX秒)。
+
+処理に要した時間
+: 処理を開始してから完了までの間にかかった時間を示す数値。
+
+Records are successfully removed or not
+: A boolean value indicating whether the specified records were successfully removed. Possible values are:
+  
+   * `true`: Records were successfully removed.
+   * `false`: Records were not removed.
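+
+以下は、主キーを指定してレコードを削除する場合のリクエストとレスポンスの一例です。テーブル名・キー・時刻などの値は説明のための仮の値です。
+
+    {
+      "type" : "delete",
+      "body" : {
+        "table" : "Person",
+        "key"   : "Bob Dole"
+      }
+    }
+    
+    => {
+         "type" : "delete.result",
+         "body" : [
+           [0, 1417344000.0, 0.002],
+           true
+         ]
+       }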

  Added: ja/reference/1.1.0/commands/index.md (+35 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/commands/index.md    2014-11-30 23:20:40 +0900 (746a66a)
@@ -0,0 +1,35 @@
+---
+title: コマンドリファレンス
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/commands/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+以下のコマンドを利用できます。
+
+## ビルトインのコマンド
+
+ * [search](search/): データの検索
+ * [add](add/): レコードの追加
+ * system: クラスタのシステム情報の取得
+   * [system.status](system/status/): クラスタのステータス情報の取得
+
+## Groonga互換コマンド
+
+ * [column_create](column-create/)
+ * [column_list](column-list/)
+ * [column_remove](column-remove/)
+ * [column_rename](column-rename/)
+ * [delete](delete/)
+ * [load](load/)
+ * [select](select/)
+ * [table_create](table-create/)
+ * [table_list](table-list/)
+ * [table_remove](table-remove/)

  Added: ja/reference/1.1.0/commands/load/index.md (+126 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/commands/load/index.md    2014-11-30 23:20:40 +0900 (d31275f)
@@ -0,0 +1,126 @@
+---
+title: load
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/commands/load/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## 概要 {#abstract}
+
+The `load` command adds new records to the specified table.
+If the table has a primary key and records with the specified keys already exist, their column values are updated with the new values.
+
+This is compatible with [the `load` command of Groonga](http://groonga.org/docs/reference/commands/load.html).
+
+## APIの形式 {#api-types}
+
+### HTTP (GET) {#api-types-http-get}
+
+リクエスト先
+: `(ドキュメントルート)/d/load`
+
+リクエストメソッド
+: `GET`
+
+リクエストのURLパラメータ
+: [パラメータの一覧](#parameters)で定義されている物を指定します。
+
+リクエストのbody
+: なし。
+
+レスポンスのbody
+: [レスポンスメッセージ](#response)。
+
+### HTTP (POST) {#api-types-http-post}
+
+リクエスト先
+: `(ドキュメントルート)/d/load`
+
+リクエストメソッド
+: `POST`
+
+リクエストのURLパラメータ
+: [パラメータの一覧](#parameters)で定義されている物のうち、`values` 以外を指定します。
+
+リクエストのbody
+: [パラメータ](#parameters)の `values` 用の値を指定します。
+
+レスポンスのbody
+: [レスポンスメッセージ](#response)。
+
+### REST {#api-types-rest}
+
+対応していません。
+
+### Fluentd {#api-types-fluentd}
+
+対応していません。
+
+## パラメータの構文 {#syntax}
+
+    {
+      "values"     : <Array of records to be loaded>,
+      "table"      : "<Name of the table>",
+      "columns"    : "<List of column names for values, separated by ','>",
+      "ifexists"   : "<Grn_expr to determine records which should be updated>",
+      "input_type" : "<Format type of the values>"
+    }
+
+## パラメータの詳細 {#parameters}
+
+`table` 以外のパラメータはすべて省略可能です。
+
+また、バージョン {{ site.droonga_version }} の時点では以下のパラメータのみが動作します。
+これら以外のパラメータは未実装のため無視されます。
+
+ * `values`
+ * `table`
+ * `columns`
+
+They are compatible with [the parameters of the `load` command of Groonga](http://groonga.org/docs/reference/commands/load.html#parameters). See the linked document for more details.
+
+HTTP clients can send `values` either as a URL parameter with the `GET` method, or as the request body with the `POST` method.
+The URL parameter `values` is always ignored if it is sent with the `POST` method.
+You should send the data with the `POST` method if there is much data, as shown in the example below.
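+
+以下は、`POST` メソッドでレコードを送信する場合のHTTPリクエストの一例です。パス・テーブル名・レコードの内容は説明のための仮の値で、実際のドキュメントルートや既存のスキーマに合わせて読み替えて下さい。
+
+    POST /d/load?table=Person HTTP/1.1
+    
+    [
+      { "_key" : "Carol Gray", "age" : 29 },
+      { "_key" : "Dave Brown", "age" : 41 }
+    ]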
+
+## レスポンス {#response}
+
+このコマンドは、レスポンスの `body` としてコマンドの実行結果に関する情報を格納した配列を返却します。
+
+    [
+      [
+        <Groonga's status code>,
+        <Start time>,
+        <Elapsed time>
+      ],
+      [<Number of loaded records>]
+    ]
+
+このコマンドはレスポンスの `statusCode` として常に `200` を返します。これは、Groonga互換コマンドのエラー情報はGroongaのそれと同じ形で処理される必要があるためです。
+
+レスポンスの `body` の詳細:
+
+ステータスコード
+: コマンドが正常に受け付けられたかどうかを示す整数値です。以下のいずれかの値をとります。
+  
+   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : 正常に処理された。
+   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : 引数が不正である。
+
+開始時刻
+: 処理を開始した時刻を示す数値(UNIX秒)。
+
+処理に要した時間
+: 処理を開始してから完了までの間にかかった時間を示す数値。
+
+Number of loaded records
+: A positive integer indicating the number of added or updated records.

  Added: ja/reference/1.1.0/commands/search/index.md (+1382 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/commands/search/index.md    2014-11-30 23:20:40 +0900 (8830940)
@@ -0,0 +1,1382 @@
+---
+title: search
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/commands/search/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## 概要 {#abstract}
+
+`search` は、1つ以上のテーブルから指定された条件にマッチするレコードを検索し、見つかったレコードに関する情報を返却します。
+
+これは、Droonga において検索機能を提供する最も低レベルのコマンドです。
+検索用のコマンドをプラグインとして実装する際は、内部的にこのコマンドを使用して検索を行うという用途が想定されます。
+
+## APIの形式 {#api-types}
+
+### HTTP {#api-types-http}
+
+リクエスト先
+: `(ドキュメントルート)/droonga/search`
+
+リクエストメソッド
+: `POST`
+
+リクエストのURLパラメータ
+: なし。
+
+リクエストのbody
+: [パラメータ](#parameters)のハッシュ。
+
+レスポンスのbody
+: [レスポンスメッセージ](#response)。
+
+### REST {#api-types-rest}
+
+リクエスト先
+: `(ドキュメントルート)/tables/(テーブル名)`
+
+リクエストメソッド
+: `GET`
+
+リクエストのURLパラメータ
+: [検索リクエストのパラメータ](#parameters)に対応する以下のパラメータを受け付けます:
+  
+   * `query`: [`(root).(テーブル名).condition.query`](#usage-condition-query-syntax) に対応する文字列。
+   * `match_to`: [`(root).(テーブル名).condition.matchTo`](#usage-condition-query-syntax) に対応するカンマ区切りの文字列。
+   * `sort_by`:  [`(root).(テーブル名).sortBy`](#query-sortBy) に対応するカンマ区切りの文字列。
+   * `attributes`: [`(root).(テーブル名).output.attributes`](#query-output) に対応するカンマ区切りの文字列。
+   * `offset`: [`(root).(テーブル名).output.offset`](#query-output) に対応する整数。
+   * `limit`: [`(root).(テーブル名).output.limit`](#query-output) に対応する整数。
+   * `timeout`: [`(root).timeout`](#parameter-timeout) に対応する整数。
+
+<!--
+   * `group_by[(column name)][key]`: A string, applied to [`(root).(column name).groupBy.key`](#query-groupBy).
+   * `group_by[(column name)][max_n_sub_records]`: An integer, applied to [`(root).(column name).groupBy.maxNSubRecords`](#query-groupBy).
+   * `group_by[(column name)][attributes]`: A comma-separated string, applied to [`(root).(column name).output.attributes`](#query-output).
+   * `group_by[(column name)][attributes][(attribute name)][source]`: A string, applied to [`(root).(column name).output.attributes.(attribute name).source`](#query-output).
+   * `group_by[(column name)][attributes][(attribute name)][attributes]`: A comma-separated string, applied to [`(root).(column name).output.attributes.(attribute name).attributes`](#query-output).
+   * `group_by[(column name)][limit]`: An integer, applied to [`(root).(column name).output.limit`](#query-output).
+-->
+  
+  例:
+  
+   * `/tables/Store?query=NY&match_to=_key&attributes=_key,*&limit=10`
+
+<!--
+   * `/tables/Store?query=NY&match_to=_key&attributes=_key,*&limit=10&group_by[location][key]=location&group_by[location][limit]=5&group_by[location][attributes]=_key,_nsubrecs`
+   * `/tables/Store?query=NY&match_to=_key&attributes=_key,*&limit=10&group_by[location][key]=location&group_by[location][limit]=5&group_by[location][attributes][_key][souce]=_key&group_by[location][attributes][_nsubrecs][souce]=_nsubrecs`
+   * `/tables/Store?query=NY&match_to=_key&limit=0&group_by[location][key]=location&group_by[location][max_n_sub_records]=5&group_by[location][limit]=5&group_by[location][attributes][sub_records][source]=_subrecs&group_by[location][attributes][sub_records][attributes]=_key,location`
+-->
+
+リクエストのbody
+: なし。
+
+レスポンスのbody
+: [レスポンスメッセージ](#response)。
+
+### Fluentd {#api-types-fluentd}
+
+形式
+: Request-Response型。コマンドに対しては必ず対応するレスポンスが返されます。
+
+リクエストの `type`
+: `search`
+
+リクエストの `body`
+: [パラメータ](#parameters)のハッシュ。
+
+レスポンスの `type`
+: `search.result`
+
+## パラメータの構文 {#syntax}
+
+    {
+      "timeout" : <タイムアウトするまでの時間(ミリ秒)>,
+      "queries" : {
+        "<クエリ1の名前>" : {
+          "source"    : "<検索対象のテーブル名、または別の検索クエリの名前>",
+          "condition" : <検索条件>,
+          "sortBy"    : <ソートの条件>,
+          "groupBy"   : <集約の条件>,
+          "output"    : <出力の指定>
+        },
+        "<クエリ2の名前>" : { ... },
+        ...
+      }
+    }
+
+## 使い方 {#usage}
+
+この項では、以下のテーブルが存在する状態を前提として、典型的な使い方を通じて `search` コマンドの使い方を説明します。
+
+Personテーブル(主キーあり):
+
+|_key|name|age|sex|job|note|
+|----|----|---|---|---|----|
+|Alice Arnold|Alice Arnold|20|female|announcer||
+|Alice Cooper|Alice Cooper|30|male|musician||
+|Alice Miller|Alice Miller|25|female|doctor||
+|Bob Dole|Bob Dole|42|male|lawyer||
+|Bob Cousy|Bob Cousy|38|male|basketball player||
+|Bob Wolcott|Bob Wolcott|36|male|baseball player||
+|Bob Evans|Bob Evans|31|male|driver||
+|Bob Ross|Bob Ross|54|male|painter||
+|Lewis Carroll|Lewis Carroll|66|male|writer|the author of Alice's Adventures in Wonderland|
+
+※`name`、`note` には `TokenBigram` を使用したインデックスが用意されていると仮定します。
+
+### 基本的な使い方 {#usage-basic}
+
+最も単純な例として、Person テーブルのすべてのレコードを出力する例を示します。
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "people" : {
+            "source" : "Person",
+            "output" : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["_key", "*"],
+              "limit"      : -1
+            }
+          }
+        }
+      }
+    }
+    
+    => {
+         "type" : "search.result",
+         "body" : {
+           "people" : {
+             "count" : 9,
+             "records" : [
+               ["Alice Arnold", "Alice Arnold", 20, "female", "announcer", ""],
+               ["Alice Cooper", "Alice Cooper", 30, "male", "musician", ""],
+               ["Alice Miller", "Alice Miller", 25, "female", "doctor", ""],
+               ["Bob Dole", "Bob Dole", 42, "male", "lawyer", ""],
+               ["Bob Cousy", "Bob Cousy", 38, "male", "basketball player", ""],
+               ["Bob Wolcott", "Bob Wolcott", 36, "male", "baseball player", ""],
+               ["Bob Evans", "Bob Evans", 31, "male", "driver", ""],
+               ["Bob Ross", "Bob Ross", 54, "male", "painter", ""],
+               ["Lewis Carroll", "Lewis Carroll", 66, "male", "writer",
+                "the author of Alice's Adventures in Wonderland"]
+             ]
+           }
+         }
+       }
+
+`people` は、この検索クエリおよびその処理結果に対して付けた一時的な名前です。
+`search` のレスポンスは、検索クエリに付けた名前を伴って返されます。
+よって、これは「この検索クエリの結果を `people` と呼ぶ」というような意味合いになります。
+
+どうしてこのコマンドが全レコードのすべての情報を出力するのでしょうか? これは以下の理由に依ります。
+
+ * 検索条件を何も指定していないため。検索条件を指定しないとすべてのレコードがマッチします。
+ * [`output`](#query-output) の `elements` パラメータに `records` (および `count`)が指定されているため。 `elements` は結果に出力する情報を制御します。マッチしたレコードの情報は `records` に、マッチしたレコードの総数は `count` に出力されます。
+ * [`output`](#query-output) の `limit` パラメータに `-1` が指定されているため。 `limit` は出力するレコードの最大数の指定ですが、 `-1` を指定するとすべてのレコードが出力されます。
+ * [`output`](#query-output) の `attributes` パラメータに `"_key"` と `"*"` の2つが指定されているため(これは「`_key` を含む Person テーブルのすべてのカラムを出力する」という指定で、`["_key", "name", "age", "sex", "job", "note"]` と書くのに等しいです)。 `attributes` は個々のレコードに出力するカラムを制御します。
+
+
+#### 検索条件 {#usage-condition}
+
+検索条件は `condition` パラメータで指定します。指定方法は、大きく分けて「スクリプト構文形式」と「クエリー構文形式」の2通りがあります。詳細は [`condition` パラメータの仕様](#query-condition) を参照して下さい。
+
+##### スクリプト構文形式の検索条件 {#usage-condition-script-syntax}
+
+スクリプト構文形式は、ECMAScriptの書式に似ています。「`name` に `Alice` を含み、且つ`age` が `25` 以上である」という検索条件は、スクリプト構文形式で以下のように表現できます。
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "people" : {
+            "source"    : "Person",
+            "condition" : "name @ 'Alice' && age >= 25",
+            "output"    : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["name", "age"],
+              "limit"      : -1
+            }
+          }
+        }
+      }
+    }
+
+    => {
+         "type" : "search.result",
+         "body" : {
+           "people" : {
+             "count" : 2,
+             "records" : [
+               ["Alice Cooper", 30],
+               ["Alice Miller", 25]
+             ]
+           }
+         }
+       }
+
+スクリプト構文の詳細な仕様は[Groonga のスクリプト構文のリファレンス](http://groonga.org/ja/docs/reference/grn_expr/script_syntax.html)を参照して下さい。
+
+##### クエリー構文形式の検索条件 {#usage-condition-query-syntax}
+
+クエリー構文形式は、主にWebページなどに組み込む検索ボックス向けに用意されています。例えば「検索ボックスに入力された語句を `name` または `note` に含むレコードを検索する」という場面において、検索ボックスに入力された語句が `Alice` であった場合の検索条件は、クエリー構文形式で以下のように表現できます。
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "people" : {
+            "source"    : "Person",
+            "condition" : {
+              "query"   : "Alice",
+              "matchTo" : ["name", "note"]
+            },
+            "output"    : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["name", "note"],
+              "limit"      : -1
+            }
+          }
+        }
+      }
+    }
+    
+    => {
+         "type" : "search.result",
+         "body" : {
+           "people" : {
+             "count" : 4,
+             "records" : [
+               ["Alice Arnold", ""],
+               ["Alice Cooper", ""],
+               ["Alice Miller", ""],
+               ["Lewis Carroll",
+                "the author of Alice's Adventures in Wonderland"]
+             ]
+           }
+         }
+       }
+
+クエリー構文の詳細な仕様は[Groonga のクエリー構文のリファレンス](http://groonga.org/ja/docs/reference/grn_expr/query_syntax.html)を参照して下さい。
+
+
+#### 検索結果のソート {#usage-sort}
+
+出力するレコードのソート条件は `sortBy` パラメータで指定します。以下は、結果を `age` カラムの値の昇順でソートする場合の例です。
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "people" : {
+            "source"    : "Person",
+            "condition" : "name @ 'Alice'",
+            "sortBy"    : ["age"],
+            "output"    : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["name", "age"],
+              "limit"      : -1
+            }
+          }
+        }
+      }
+    }
+    
+    => {
+         "type" : "search.result",
+         "body" : {
+           "people" : {
+             "count" : 3,
+             "records" : [
+               ["Alice Arnold", 20],
+               ["Alice Miller", 25],
+               ["Alice Cooper", 30]
+             ]
+           }
+         }
+       }
+
+ソートするカラム名の前に `-` を付けると、降順でのソートになります。以下は `age` の降順でソートする場合の例です。
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "people" : {
+            "source"    : "Person",
+            "condition" : "name @ 'Alice'",
+            "sortBy"    : ["-age"],
+            "output"    : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["name", "age"],
+              "limit"      : -1
+            }
+          }
+        }
+      }
+    }
+    
+    => {
+         "type" : "search.result",
+         "body" : {
+           "people" : {
+             "count" : 3,
+             "records" : [
+               ["Alice Cooper", 30],
+               ["Alice Miller", 25],
+               ["Alice Arnold", 20]
+             ]
+           }
+         }
+       }
+
+詳細は [`sortBy` パラメータの仕様](#query-sortBy) を参照して下さい。
+
+#### 検索結果のページング {#usage-paging}
+
+[`output`](#query-output) パラメータの `offset` と `limit` を指定することで、出力されるレコードの範囲を指定できます。以下は、20件以上ある結果を先頭から順に10件ずつ取得する場合の例です。
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "people" : {
+            "source" : "Person",
+            "output" : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["name"],
+              "offset"     : 0,
+              "limit"      : 10
+            }
+          }
+        }
+      }
+    }
+    
+    => 0件目から9件目までの10件が返される。
+    
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "people" : {
+            "source" : "Person",
+            "output" : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["name"],
+              "offset"     : 10,
+              "limit"      : 10
+            }
+          }
+        }
+      }
+    }
+    
+    => 10件目から19件目までの10件が返される。
+    
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "people" : {
+            "source" : "Person",
+            "output" : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["name"],
+              "offset"     : 20,
+              "limit"      : 10
+            }
+          }
+        }
+      }
+    }
+    
+    => 20件目から29件目までの10件が返される。
+
+`limit` に `-1` を指定することは、実際の運用では推奨されません。膨大な量のレコードがマッチした場合、出力のための処理にリソースを使いすぎてしまいますし、ネットワークの帯域も浪費してしまいます。コンピュータの性能にもよりますが、`limit` には `100` 程度までの値を上限として指定し、それ以上のレコードは適宜ページングで取得するようにして下さい。
+
+詳細は [`output` パラメータの仕様](#query-output) を参照して下さい。
+
+また、ページングは [`sortBy` パラメータの機能](#query-sortBy-hash)でも行う事ができ、一般的にはそちらの方が高速に動作します。
+よって、可能な限り `output` でのページングよりも `sortBy` でのページングの方を使う事が推奨されます。
+
+
+#### 出力形式 {#usage-format}
+
+ここまでの例では、レコードの一覧はすべて配列の配列として出力されていました。[`output`](#query-output) パラメータの `format` を指定すると、出力されるレコードの形式を変える事ができます。以下は、`format` に `complex` を指定した場合の例です。
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "people" : {
+            "source" : "Person",
+            "output" : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["_key", "name", "age", "sex", "job", "note"],
+              "limit"      : 3,
+              "format"     : "complex"
+            }
+          }
+        }
+      }
+    }
+    
+    => {
+         "type" : "search.result",
+         "body" : {
+           "people" : {
+             "count" : 9,
+             "records" : [
+               { "_key" : "Alice Arnold",
+                 "name" : "Alice Arnold",
+                 "age"  : 20,
+                 "sex"  : "female",
+                 "job"  : "announcer",
+                 "note" : "" },
+               { "_key" : "Alice Cooper",
+                 "name" : "Alice Cooper",
+                 "age"  : 30,
+                 "sex"  : "male",
+                 "job"  : "musician",
+                 "note" : "" },
+               { "_key" : "Alice Miller",
+                 "name" : "Alice Miller",
+                 "age"  : 25,
+                 "sex"  : "female",
+                 "job"  : "doctor",
+                 "note" : "" }
+             ]
+           }
+         }
+       }
+
+`format` に `complex` を指定した場合、レコードの一覧はこの例のようにカラム名をキーとしたハッシュの配列として出力されます。
+`format` に `simple` を指定した場合、または `format` の指定を省略した場合、レコードの一覧は配列の配列として出力されます。
+
+詳細は [`output` パラメータの仕様](#query-output) および [レスポンスの仕様](#response) を参照して下さい。
+
+
+### 高度な使い方 {#usage-advanced}
+
+#### 検索結果の集約 {#usage-group}
+
+[`groupBy`](#query-groupBy) パラメータを指定することで、レコードを指定カラムの値で集約した結果を取得することができます。以下は、テーブルの内容を `sex` カラムの値で集約した結果と、集約前のレコードがそれぞれ何件あったかを取得する例です。
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "sexuality" : {
+            "source"  : "Person",
+            "groupBy" : "sex",
+            "output"  : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["_key", "_nsubrecs"],
+              "limit"      : -1
+            }
+          }
+        }
+      }
+    }
+    
+    => {
+         "type" : "search.result",
+         "body" : {
+           "sexuality" : {
+             "count" : 2,
+             "records" : [
+               ["female", 2],
+               ["male", 7]
+             ]
+           }
+         }
+       }
+
+上記の結果は、 `sex` の値が `female` であるレコードが2件、`male` であるレコードが7件存在していて、`sex` の値の種類としては2通りが登録されている事を示しています。
+
+また、集約前のレコードを代表値として取得する事もできます。以下は、`sex` カラムの値で集約した結果と、それぞれの集約前のレコードを2件ずつ取得する例です。
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "sexuality" : {
+            "source"  : "Person",
+            "groupBy" : {
+              "keys"           : "sex",
+              "maxNSubRecords" : 2
+            },
+            "output"  : {
+              "elements"   : ["count", "records"],
+              "attributes" : [
+                "_key",
+                "_nsubrecs",
+                { "label"      : "subrecords",
+                  "source"     : "_subrecs",
+                  "attributes" : ["name"] }
+              ],
+              "limit"      : -1
+            }
+          }
+        }
+      }
+    }
+    
+    => {
+         "type" : "search.result",
+         "body" : {
+           "sexuality" : {
+             "count" : 2,
+             "records" : [
+               ["female", 2, [["Alice Arnold"], ["Alice Miller"]]],
+               ["male",   7, [["Alice Cooper"], ["Bob Dole"]]]
+             ]
+           }
+         }
+       }
+
+
+詳細は [`groupBy` パラメータの仕様](#query-groupBy) を参照して下さい。
+
+
+#### 複数の検索クエリの列挙 {#usage-multiple-queries}
+
+`search` は、一度に複数の検索クエリを受け付ける事ができます。以下は、`age` が `25` 以下のレコードと `age` が `40` 以上のレコードを同時に検索する例です。
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "junior" : {
+            "source"    : "Person",
+            "condition" : "age <= 25",
+            "output"    : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["name", "age"],
+              "limit"      : -1
+            }
+          },
+          "senior" : {
+            "source"    : "Person",
+            "condition" : "age >= 40",
+            "output"    : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["name", "age"],
+              "limit"      : -1
+            }
+          }
+        }
+      }
+    }
+    
+    => {
+         "type" : "search.result",
+         "body" : {
+           "junior" : {
+             "count" : 2,
+             "records" : [
+               ["Alice Arnold", 20],
+               ["Alice Miller", 25]
+             ]
+           },
+           "senior" : {
+             "count" : 3,
+             "records" : [
+               ["Bob Dole", 42],
+               ["Bob Ross", 54],
+               ["Lewis Carroll", 66]
+             ]
+           }
+         }
+       }
+
+レスポンスに含まれる検索結果は、各クエリに付けた一時的な名前で識別することになります。
+
+#### 検索のチェーン {#usage-chain}
+
+検索クエリを列挙する際は、`source` パラメータの値として実在するテーブルの名前だけでなく、別の検索クエリに付けた一時的な名前を指定する事ができます。これにより、1つの検索クエリでは表現できない複雑な検索を行う事ができます。
+
+以下は、Personテーブルについて `name` カラムが `Alice` を含んでいるレコードを検索した結果と、それをさらに `sex` カラムの値で集約した結果を同時に取得する例です。
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "people" : {
+            "source"    : "Person",
+            "condition" : "name @ 'Alice'",
+            "output"    : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["name", "age"],
+              "limit"      : -1
+            }
+          },
+          "sexuality" : {
+            "source"  : "people",
+            "groupBy" : "sex",
+            "output"  : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["_key", "_nsubrecs"],
+              "limit"      : -1
+            }
+          }
+        }
+      }
+    }
+    
+    => {
+         "type" : "search.result",
+         "body" : {
+           "people" : {
+             "count" : 3,
+             "records" : [
+               ["Alice Cooper", 30],
+               ["Alice Miller", 25],
+               ["Alice Arnold", 20]
+             ]
+           },
+           "sexuality" : {
+             "count" : 2,
+             "records" : [
+               ["female", 2],
+               ["male", 1]
+             ]
+           }
+         }
+       }
+
+個々の検索クエリの結果は出力しない(中間テーブルとしてのみ使う)事もできます。
+以下は、Personテーブルについて `job` カラムの値で集約した結果をまず求め、そこからさらに `player` という語句を含んでいる項目に絞り込む例です。
+(※この場合の2つ目の検索ではインデックスが使用されないため、検索処理が遅くなる可能性があります。)
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "allJob" : {
+            "source"  : "Person",
+            "groupBy" : "job"
+          },
+          "playerJob" : {
+            "source"    : "allJob",
+            "condition" : "_key @ 'player'",
+            "output"  : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["_key", "_nsubrecs"],
+              "limit"      : -1
+            }
+          }
+        }
+      }
+    }
+    
+    => {
+         "type" : "search.result",
+         "body" : {
+           "playerJob" : {
+             "count" : 2,
+             "records" : [
+               ["basketball player", 1],
+               ["baseball player", 1]
+             ]
+           }
+         }
+       }
+
+
+## パラメータの詳細 {#parameters}
+
+### 全体のパラメータ {#container-parameters}
+
+#### `timeout` {#parameter-timeout}
+
+※註:このパラメータはバージョン {{ site.droonga_version }} では未実装です。指定しても機能しません。
+
+概要
+: 検索処理がタイムアウトするまでの時間を指定します。
+
+値
+: タイムアウトするまでの時間の数値(単位:ミリ秒)。
+
+省略時の初期値
+: 10000(10秒)
+
+指定した時間以内に Droonga Engine が検索の処理を完了できなかった場合、Droonga はその時点で検索処理を打ち切り、エラーを返却します。
+クライアントは、この時間を過ぎた後は検索処理に関するリソースを解放して問題ありません。
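+
+例えば、タイムアウトを30秒に設定する場合の指定は以下のようになります(検索クエリの内容は説明のための一例です)。なお、前述の通りバージョン {{ site.droonga_version }} ではこの指定は機能しません。
+
+    {
+      "timeout" : 30000,
+      "queries" : {
+        "people" : {
+          "source" : "Person",
+          "output" : {
+            "elements" : ["count"]
+          }
+        }
+      }
+    }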
+
+#### `queries` {#parameter-queries}
+
+概要
+: 検索クエリとして、検索の条件と出力の形式を指定します。
+
+値
+: 個々の検索クエリの名前をキー、[個々の検索クエリ](#query-parameters)の内容を値としたハッシュ。
+
+省略時の既定値
+: なし。このパラメータは必須です。
+
+`search` は、複数の検索クエリを一度に受け取る事ができます。
+
+バージョン {{ site.droonga_version }} ではすべての検索クエリの結果を一度にレスポンスとして返却する動作のみ対応していますが、将来的には、それぞれの検索クエリの結果を分割して受け取る(結果が出た物からバラバラに受け取る)動作にも対応する予定です。
+
+### 個々の検索クエリのパラメータ {#query-parameters}
+
+#### `source` {#query-source}
+
+概要
+: 検索対象とするデータソースを指定します。
+
+値
+: テーブル名の文字列、または結果を参照する別の検索クエリの名前の文字列。
+
+省略時の既定値
+: なし。このパラメータは必須です。
+
+別の検索クエリの処理結果をデータソースとして指定する事により、ファセット検索などを行う事ができます。
+
+なお、その場合の各検索クエリの実行順(依存関係)は Droonga が自動的に解決します。
+依存関係の順番通りに各検索クエリを並べて記述する必要はありません。
+
+#### `condition` {#query-condition}
+
+概要
+: 検索の条件を指定します。
+
+値
+: 以下のパターンのいずれかをとります。
+  
+  1. [スクリプト構文](http://groonga.org/ja/docs/reference/grn_expr/script_syntax.html)形式の文字列。
+  2. [スクリプト構文](http://groonga.org/ja/docs/reference/grn_expr/script_syntax.html)形式の文字列を含むハッシュ。
+  3. [クエリー構文](http://groonga.org/ja/docs/reference/grn_expr/query_syntax.html)形式の文字列を含むハッシュ。
+  4. 1〜3および演算子の文字列の配列。 
+
+省略時の既定値
+: なし。
+
+検索条件を指定しなかった場合、データソースに含まれるすべてのレコードが検索結果として取り出され、その後の処理に使われます。
+
+##### スクリプト構文形式の文字列による検索条件 {#query-condition-script-syntax-string}
+
+以下のような形式の文字列で検索条件を指定します。
+
+    "name == 'Alice' && age >= 20"
+
+上記の例は「 `name` カラムの値が `"Alice"` と等しく、且つ `age` カラムの値が20以上である」という意味になります。
+
+詳細は[Groonga のスクリプト構文のリファレンス](http://groonga.org/ja/docs/reference/grn_expr/script_syntax.html)を参照して下さい。
+
+##### スクリプト構文形式の文字列を含むハッシュによる検索条件 {#query-condition-script-syntax-hash}
+
+[スクリプト構文形式の文字列による検索条件](#query-condition-script-syntax-string)をベースとした、以下のような形式のハッシュで検索条件を指定します。
+
+    {
+      "script"      : "name == 'Alice' && age >= 20",
+      "allowUpdate" : true
+    }
+
+(詳細未稿:仕様が未確定、動作が不明、未実装のため)
+
+##### クエリー構文形式の文字列を含むハッシュ {#query-condition-query-syntax-hash}
+
+以下のような形式のハッシュで検索条件を指定します。
+
+    {
+      "query"                    : "Alice",
+      "matchTo"                  : ["name * 2", "job * 1"],
+      "defaultOperator"          : "&&",
+      "allowPragma"              : true,
+      "allowColumn"              : true,
+      "matchEscalationThreshold" : 10
+    }
+
+`query`
+: クエリを文字列で指定します。
+  詳細は[Groonga のクエリー構文の仕様](http://groonga.org/ja/docs/reference/grn_expr/query_syntax.html)を参照して下さい。
+  このパラメータは省略できません。
+
+`matchTo`
+: 検索対象のカラムを、カラム名の文字列またはその配列で指定します。
+  カラム名の後に `name * 2` のような指定を加える事で、重み付けができます。
+  このパラメータは省略可能で、省略時の初期値は `"_key"` です。
+
+`defaultOperator`
+: `query` に複数のクエリが列挙されている場合の既定の論理演算の条件を指定します。
+  以下のいずれかの文字列を指定します。
+  
+   * `"&&"` : AND条件と見なす。
+   * `"||"` : OR条件と見なす。
+   * `"-"`  : [論理否定](http://groonga.org/ja/docs/reference/grn_expr/query_syntax.html#logical-not)条件と見なす。
+  
+  このパラメータは省略可能で、省略時の初期値は `"&&"` です。
+
+`allowPragma`
+: `query` の先頭において、`*E-1` のようなプラグマの指定を許容するかどうかを真偽値で指定します。
+  このパラメータは省略可能で、省略時の初期値は `true` (プラグマの指定を許容する)です。
+
+`allowColumn`
+: `query` において、カラム名を指定した `name:Alice` のような書き方を許容するかどうかを真偽値で指定します。
+  このパラメータは省略可能で、省略時の初期値は `true` (カラム名の指定を許容する)です。
+
+`allowLeadingNot`
+: `query` において、最初のクエリに `-foobar` のような否定演算子が登場することを許容するかどうかを真偽値で指定します。
+  このパラメータは省略可能で、省略時の初期値は `false` (最初のクエリでの否定演算子を許容しない)です。
+
+`matchEscalationThreshold`
+: 検索方法をエスカレーションするかどうかを決定するための閾値を指定します。
+  インデックスを用いた全文検索のヒット件数がこの閾値以下であった場合は、非分かち書き検索、部分一致検索へエスカレーションします。
+  詳細は [Groonga の検索の仕様の説明](http://groonga.org/ja/docs/spec/search.html)を参照して下さい。
+  このパラメータは省略可能で、省略時の初期値は `0` です。
+
+
+##### 配列による検索条件 {#query-condition-array}
+
+以下のような形式の配列で検索条件を指定します。
+
+    [
+      "&&",
+      <検索条件1>,
+      <検索条件2>,
+      ...
+    ]
+
+配列の最初の要素は、論理演算子を以下のいずれかの文字列で指定します。
+
+ * `"&&"` : AND条件と見なす。
+ * `"||"` : OR条件と見なす。
+ * `"-"`  : [論理否定](http://groonga.org/ja/docs/reference/grn_expr/query_syntax.html#logical-not)条件と見なす。
+
+配列の2番目以降の要素で示された検索条件について、1番目の要素で指定した論理演算子による論理演算を行います。
+例えば以下は、スクリプト構文形式の文字列による検索条件2つによるAND条件であると見なされ、「 `name` カラムの値が `"Alice"` と等しく、且つ `age` カラムの値が20以上である」という意味になります。
+
+    ["&&", "name == 'Alice'", "age >= 20"]
+
+配列を入れ子にする事により、より複雑な検索条件を指定する事もできます。
+例えば以下は、「 `name` カラムの値が `"Alice"` と等しく、且つ `age` カラムの値が20以上であるが、 `job` カラムの値が `"engineer"` ではない」という意味になります。
+
+    [
+      "-",
+      ["&&", "name == 'Alice'", "age >= 20"],
+      "job == 'engineer'"
+    ]
+
+#### `sortBy` {#query-sortBy}
+
+概要
+: ソートの条件および取り出すレコードの範囲を指定します。
+
+値
+: 以下のパターンのいずれかをとります。
+  
+  1. カラム名の文字列の配列。
+  2. ソート条件と取り出すレコードの範囲を指定するハッシュ。 
+
+省略時の既定値
+: なし。
+
+ソート条件が指定されなかった場合、すべての検索結果がそのままの並び順でソート結果として取り出され、その後の処理に使われます。
+
+##### 基本的なソート条件の指定 {#query-sortBy-array}
+
+ソート条件はカラム名の文字列の配列として指定します。
+
+Droongaはまず最初に指定したカラムの値でレコードをソートし、カラムの値が同じレコードが複数あった場合は2番目に指定したカラムの値でさらにソートする、という形で、すべての指定カラムの値に基づいてソートを行います。
+
+ソート対象のカラムを1つだけ指定する場合であっても、必ず配列として指定する必要があります。
+
+ソート順序は指定したカラムの値での昇順となります。カラム名の前に `-` を加えると降順となります。
+
+例えば以下は、「 `name` の値で昇順にソートし、同じ値のレコードはさらに `age` の値で降順にソートする」という意味になります。
+
+    ["name", "-age"]
+
+##### ソート結果から取り出すレコードの範囲の指定 {#query-sortBy-hash}
+
+ソートの指定において、以下の形式でソート結果から取り出すレコードの範囲を指定する事ができます。
+
+    {
+      "keys"   : [<ソート対象のカラム>],
+      "offset" : <ページングの起点>,
+      "limit"  : <取り出すレコード数>
+    }
+
+`keys`
+: ソート条件を[基本的なソート条件の指定](#query-sortBy-array)の形式で指定します。
+  このパラメータは省略できません。
+
+`offset`
+: 取り出すレコードのページングの起点を示す `0` または正の整数。
+  
+  このパラメータは省略可能で、省略時の既定値は `0` です。
+
+`limit`
+: 取り出すレコード数を示す `-1` 、 `0` 、または正の整数。
+  `-1`を指定すると、すべてのレコードを取り出します。
+  
+  このパラメータは省略可能で、省略時の既定値は `-1` です。
+
+例えば以下は、ソート結果の10番目から19番目までの10件のレコードを取り出すという意味になります。
+
+    {
+      "keys"   : ["name", "-age"],
+      "offset" : 10,
+      "limit"  : 10
+    }
+
+これらの指定を行った場合、取り出されたレコードのみがその後の処理の対象となります。
+そのため、 `output` における `offset` および `limit` の指定よりも高速に動作します。
+
+
+#### `groupBy` {#query-groupBy}
+
+概要
+: 処理対象のレコード群を集約する条件を指定します。
+
+値
+: 以下のパターンのいずれかをとります。
+  
+  1. 基本的な集約条件(カラム名または式)の文字列。
+  2. 複雑な集約条件を指定するハッシュ。 
+
+省略時の既定値
+: なし。
+
+集約条件を指定した場合、指定に基づいてレコードを集約した結果がレコードとして取り出され、その後の処理に使われます。
+
+##### 基本的な集約条件の指定 {#query-groupBy-string}
+
+基本的な集約条件では、処理対象のレコード群が持つカラムの名前を文字列として指定します。
+
+Droongaはそのカラムの値が同じであるレコードを集約し、カラムの値をキーとした新しいレコード群を結果として出力します。
+集約結果のレコードは以下のカラムを持ちます。
+
+`_key`
+: 集約前のレコード群における、集約対象のカラムの値です。
+
+`_nsubrecs`
+: 集約前のレコード群における、集約対象のカラムの値が一致するレコードの総数を示す数値です。
+
+例えば以下は、`job` カラムの値でレコードを集約し、`job` カラムの値としてどれだけの種類が存在しているのか、および、各 `job` の値を持つレコードが何件存在しているのかを集約結果として取り出すという意味になります。
+
+    "job"
+
+##### 複雑な集約条件の指定 {#query-groupBy-hash}
+
+集約の指定において、集約結果の一部として出力する集約前のレコードの数を、以下の形式で指定する事ができます。
+
+    {
+      "key"            : "<基本的な集約条件>",
+      "maxNSubRecords" : <集約結果の一部として出力する集約前のレコードの数>
+    }
+
+`key`
+: [基本的な集約条件の指定](#query-groupBy-string)の形式による、集約条件を指定する文字列。
+  このパラメータは省略できません。
+
+`maxNSubRecords`
+: 集約結果の一部として出力する集約前のレコードの最大数を示す `0` または正の整数。
+  `-1` は指定できません。
+  
+  このパラメータは省略可能で、省略時の既定値は `0` です。
+  
+例えば以下は、`job` カラムの値でレコードを集約した結果について、各 `job` カラムの値を含んでいるレコードを代表として1件ずつ取り出すという意味になります。
+
+    {
+      "key"            : "job",
+      "maxNSubRecords" : 1
+    }
+
+集約結果のレコードは、[基本的な集約条件の指定](#query-groupBy-string)の集約結果のレコード群が持つすべてのカラムに加えて、以下のカラムを持ちます。
+
+`_subrecs`
+: 集約前のレコード群における、集約対象のカラムの値が一致するレコードの配列。
+  
+※バージョン {{ site.droonga_version }} では、データセットが複数のボリュームに分かれている場合、集約前のレコードの代表が `maxNSubRecords` で指定した数よりも多く返される場合があります。これは既知の問題で、将来のバージョンで修正される予定です。
+
+
+#### `output` {#query-output}
+
+概要
+: 処理結果の出力形式を指定します。
+
+値
+: 出力形式を指定するハッシュ。 
+
+省略時の既定値
+: なし。
+
+指定を省略した場合、その検索クエリの検索結果はレスポンスには出力されません。
+集約操作などのために必要な中間テーブルにあたる検索結果を求めるだけの検索クエリにおいては、 `output` を省略して処理時間や転送するデータ量を減らすことができます。
+
+出力形式は、以下の形式のハッシュで指定します。
+
+    {
+      "elements"   : [<出力する情報の名前の配列>],
+      "format"     : "<検索結果のレコードの出力スタイル>",
+      "offset"     : <ページングの起点>,
+      "limit"      : <出力するレコード数>,
+      "attributes" : <レコードのカラムの出力指定の配列>
+    }
+
+`elements`
+: その検索クエリの結果として[レスポンス](#response)に出力する情報を、プロパティ名の文字列の配列で指定します。
+  以下の項目を指定できます。項目は1つだけ指定する場合であっても必ず配列で指定します。
+  
+   * `"startTime"`
+   * `"elapsedTime"`
+   * `"count"`
+   * `"attributes"`
+   * `"records"`
+  
+  このパラメータは省略可能で、省略時の初期値はありません(結果を何も出力しません)。
+
+`format`
+: 検索結果のレコードの出力スタイルを指定します。
+  以下のいずれかの値(文字列)を取ります。
+  
+   * `"simple"`  : 個々のレコードを配列として出力します。
+   * `"complex"` : 個々のレコードをハッシュとして出力します。
+  
+  このパラメータは省略可能で、省略時の初期値は `"simple"` です。
+
+`offset`
+: 出力するレコードのページングの起点を示す `0` または正の整数。
+  
+  このパラメータは省略可能で、省略時の既定値は `0` です。
+
+`limit`
+: 出力するレコード数を示す `-1` 、 `0` 、または正の整数。
+  `-1`を指定すると、すべてのレコードを出力します。
+  
+  このパラメータは省略可能で、省略時の既定値は `0` です。
+
+`attributes`
+: レコードのカラムの値について、出力形式を配列で指定します。
+  個々のカラムの値の出力形式は以下のいずれかで指定します。
+  
+   1. カラムの定義の配列。
+   2. カラムの定義を値としたハッシュ。
+  
+  各カラムは以下の形式のいずれかで指定します。
+  
+   * カラム名の文字列。例は以下の通りです。
+     * `"name"` : `name` カラムの値をそのまま `name` カラムとして出力します。
+     * `"age"`  : `age` カラムの値をそのまま `age` カラムとして出力します。
+   * 詳細な出力形式指定のハッシュ。例は以下の通りです。
+     * 以下の例は、 `name` カラムの値を `realName` カラムとして出力します。
+       
+           { "label" : "realName", "source" : "name" }
+       
+     * 以下の例は、 `name` カラムの値について、全文検索にヒットした位置を強調したHTMLコード片の文字列を `html` カラムとして出力します。
+       
+           { "label" : "html", "source": "snippet_html(name)" }
+       
+     * 以下の例は、`country` カラムについて、すべてのレコードの当該カラムの値が文字列 `"Japan"` であるものとして出力します。
+       (存在しないカラムを実際に作成する前にクライアント側の挙動を確認したい場合などに、この機能が利用できます。)
+       
+           { "label" : "country", "source" : "'Japan'" }
+       
+     * 以下の例は、集約前の元のレコードの総数を、集約後のレコードの `"itemsCount"` カラムの値として出力します。
+       
+           { "label" : "itemsCount", "source" : "_nsubrecs" }
+       
+     * 以下の例は、集約前の元のレコードの配列を、集約後のレコードの `"items"` カラムの値として出力します。
+       `"attributes"` は、この項の説明と同じ形式で指定します。
+       
+           { "label" : "items", "source" : "_subrecs",
+             "attributes": ["name", "price"] }
+  
+  カラムの定義の配列には、上記の形式で示されたカラムの定義を0個以上含めることができます。例:
+  
+      [
+        "name",
+        "age",
+        { "label" : "realName", "source" : "name" }
+      ]
+  
+  この場合、「`_key` のような特殊なカラムを除くすべてのカラム」を意味する特別なカラム名 `"*"`を使用できます。
+  
+    * `["*"]` と指定すると、(`_key` や `_id` 以外の)すべてのカラムがそのままの形で出力されます。
+    * `["_key", "*"]` と指定すると、 `_key` に続いてすべてのカラムがそのままの形で出力されます。
+    * `["*", "_nsubrecs"]` と指定すると、 すべてのカラムがそのままの形で出力された後に `_nsubrecs` が出力されます。
+  
+  カラムの定義を値としたハッシュでは、カラムの出力名をキー、上記の形式で示されたカラムの定義を値として、カラムの定義を0個以上含めることができます。例:
+  
+      {
+        "name"     : "name",
+        "age"      : "age",
+        "realName" : { "source" : "name" },
+        "country"  : { "source" : "'Japan'" }
+      }
+  
+  このパラメータは省略可能で、省略時の既定値はありません。カラムの指定がない場合、カラムの値は一切出力されません。
+
+
+## レスポンス {#response}
+
+このコマンドは、検索結果を`body` 、ステータスコード `200` を `statusCode` の値としたレスポンスを返します。
+
+検索結果のハッシュは、個々の検索クエリの名前をキー、対応する[個々の検索クエリ](#query-parameters)の処理結果を値とした、以下のような形式を取ります。
+
+    {
+      "<クエリ1の名前>" : {
+        "startTime"   : "<検索を開始した時刻>",
+        "elapsedTime" : <検索にかかった時間(単位:ミリ秒)>,
+        "count"       : <指定された検索条件に該当するレコードの総数>,
+        "attributes"  : <出力されたレコードのカラムの情報の配列またはハッシュ>,
+        "records"     : [<出力されたレコードの配列>]
+      },
+      "<クエリ2の名前>" : { ... },
+      ...
+    }
+
+検索クエリの処理結果のハッシュは以下の項目を持つことができ、[検索クエリの `output`](#query-output) の `elements` で明示的に指定された項目のみが出力されます。
+
+### `startTime` {#response-query-startTime}
+
+検索を開始した時刻(ローカル時刻)の文字列です。
+
+形式は、[W3C-DTF](http://www.w3.org/TR/NOTE-datetime "Date and Time Formats")のタイムゾーンを含む形式となります。
+例えば以下の要領です。
+
+    2013-11-29T08:15:30+09:00
+
+### `elapsedTime` {#response-query-elapsedTime}
+
+検索にかかった時間の数値(単位:ミリ秒)です。
+
+### `count` {#response-query-count}
+
+検索条件に該当するレコードの総数の数値です。
+この値は、検索クエリの [`sortBy`](#query-sortBy) や [`output`](#query-output) における `offset` および `limit` の指定の影響を受けません。
+
+### `attributes` および `records` {#response-query-attributes-and-records}
+
+ * `attributes` は出力されたレコードのカラムの情報を示す配列またはハッシュです。
+ * `records` は出力されたレコードの配列です。
+
+`attributes` および `records` の出力形式は[検索クエリの `output`](#query-output) の `format` の指定に従って以下の2通りに別れます。
+
+#### 単純な形式のレスポンス {#response-query-simple-attributes-and-records}
+
+`format` が `"simple"` の場合、個々の検索クエリの結果は以下の形を取ります。
+
+    {
+      "startTime"   : "<検索を開始した時刻>",
+      "elapsedTime" : <検索にかかった時間>,
+      "count"       : <検索結果のレコードの総数>,
+      "attributes"  : [
+        { "name"   : "<カラム1の名前>",
+          "type"   : "<カラム1の型>",
+          "vector" : <カラム1がベクターカラムかどうか> },
+        { "name"   : "<カラム2の名前>",
+          "type"   : "<カラム2の型>",
+          "vector" : <カラム2がベクターカラムかどうか> },
+        { "name"       : "<カラム3(サブレコードが存在する場合)の名前>",
+          "attributes" : [
+          { "name"   : "<カラム3-1のカラムの名前>",
+            "type"   : "<カラム3-1のカラムの型>",
+            "vector" : <カラム3-1がベクターカラムかどうか> },
+          { "name"   : "<カラム3-2のカラムの名前>",
+            "type"   : "<カラム3-2のカラムの型>",
+            "vector" : <カラム3-2がベクターカラムかどうか> },
+          ],
+          ...
+        },
+        ...
+      ],
+      "records"     : [
+        [<レコード1のカラム1の値>,
+         <レコード1のカラム2の値>,
+         [
+          [<レコード1のサブレコード1のカラム3-1の値>,
+           <レコード1のサブレコード1のカラム3-2の値>,
+           ...],
+          [<レコード1のサブレコード2のカラム3-1の値>,
+           <レコード1のサブレコード2のカラム3-2の値>,
+           ...],
+          ...],
+         ...],
+        [<レコード2のカラム1の値>,
+         <レコード2のカラム2の値>,
+         [
+          [<レコード2のサブレコード1のカラム3-1の値>,
+           <レコード2のサブレコード1のカラム3-2の値>,
+           ...],
+          [<レコード2のサブレコード2のカラム3-1の値>,
+           <レコード2のサブレコード2のカラム3-2の値>,
+           ...],
+          ...],
+         ...],
+        ...
+      ]
+    }
+
+これは、受け取ったデータの扱いやすさよりも、データの転送量を小さく抑える事を優先する出力形式です。
+大量のレコードを検索結果として受け取る場合や、多量のアクセスが想定される場合などに適しています。
+
+##### `attributes` {#response-query-simple-attributes}
+
+出力されたレコードのカラムについての情報の配列で、[検索クエリの `output`](#query-output) における `attributes` で指定された順番で個々のカラムの情報を含みます。
+
+個々のカラムの情報はハッシュの形をとり、その形式はレコードの値に応じて以下の3種類で与えられます。ハッシュのキーと値は以下のとおりです。
+
+###### 通常のカラム
+
+`name`
+: カラムの出力名の文字列です。[検索クエリの `output`](#query-output) における `attributes` の指定内容に基づきます。
+
+`type`
+: カラムの値の型を示す文字列です。
+  値は[Groonga のプリミティブなデータ型](http://groonga.org/ja/docs/reference/types.html)の名前か、もしくはテーブル名です。
+
+`vector`
+: カラムが[ベクター型](http://groonga.org/ja/docs/tutorial/data.html#vector-types)かどうかを示す真偽値です。
+  以下のいずれかの値をとります。
+  
+   * `true`  : カラムはベクター型である。
+   * `false` : カラムはベクター型ではない(スカラー型である)。
+
+###### サブレコードに対応するカラム
+
+`name`
+: カラムの出力名の文字列です。[検索クエリの `output`](#query-output) における `attributes` の指定内容に基づきます。
+
+サブレコードのカラム情報を含む配列です。この形式は主レコードの `attributes` と同様です。つまり `attributes` は再帰的な構造になっています。
+
+###### 式
+
+`name`
+: カラムの出力名の文字列です。[検索クエリの `output`](#query-output) における `attributes` の指定内容に基づきます。
+
+##### `records` {#response-query-simple-records}
+
+出力されたレコードの配列です。
+
+個々のレコードは配列の形をとり、[検索クエリの `output`](#query-output) における `attributes` で指定された各カラムの値を同じ順番で含みます。
+
+[日時型](http://groonga.org/ja/docs/tutorial/data.html#date-and-time-type)のカラムの値は、[W3C-DTF](http://www.w3.org/TR/NOTE-datetime "Date and Time Formats")のタイムゾーンを含む形式の文字列として出力されます。
+
+#### 複雑な形式のレスポンス {#response-query-complex-attributes-and-records}
+
+`format` が `"complex"` の場合、個々の検索クエリの結果は以下の形を取ります。
+
+    {
+      "startTime"   : "<検索を開始した時刻>",
+      "elapsedTime" : <検索にかかった時間>,
+      "count"       : <検索結果のレコードの総数>,
+      "attributes"  : {
+        "<カラム1の名前>" : { "type"   : "<カラム1の型>",
+                            "vector" : <カラム1がベクターカラムかどうか> },
+        "<カラム2の名前>" : { "type"   : "<カラム2の型>",
+                            "vector" : <カラム2がベクターカラムかどうか> },
+        "<カラム3(サブレコードが存在する場合)の名前>" : {
+          "attributes" : {
+            "<カラム3-1の名前>" : { "type"   : "<カラム3-1の型>",
+                                  "vector" : <カラム3-1がベクターカラムかどうか> },
+            "<カラム3-2の名前>" : { "type"   : "<カラム3-2の型>",
+                                  "vector" : <カラム3-2がベクターカラムかどうか> },
+            ...
+          }
+        },
+        ...
+      },
+      "records"     : [
+        { "<カラム1の名前>" : <レコード1のカラム1の値>,
+          "<カラム2の名前>" : <レコード1のカラム2の値>,
+          "<カラム3の名前(サブレコードが存在する場合)>" : [
+            { "<カラム3-1の名前>" : <レコード1のサブレコード1のカラム3-1の値>,
+              "<カラム3-2の名前>" : <レコード1のサブレコード1のカラム3-2の値>,
+              ... },
+            { "<カラム3-1の名前>" : <レコード1のサブレコード2のカラム3-1の値>,
+              "<カラム3-2の名前>" : <レコード1のサブレコード2のカラム3-2の値>,
+              ... },
+            ...
+          ],
+          ...                                                                },
+        { "<カラム1の名前>" : <レコード2のカラム1の値>,
+          "<カラム2の名前>" : <レコード2のカラム2の値>,
+          "<カラム3の名前(サブレコードが存在する場合)>" : [
+            { "<カラム3-1の名前>" : <レコード2のサブレコード1のカラム3-1の値>,
+              "<カラム3-2の名前>" : <レコード2のサブレコード1のカラム3-2の値>,
+              ... },
+            { "<カラム3-1の名前>" : <レコード2のサブレコード2のカラム3-1の値>,
+              "<カラム3-2の名前>" : <レコード2のサブレコード2のカラム3-2の値>,
+              ... },
+            ...
+          ],
+          ...                                                                },
+        ...
+      ]
+    }
+
+これは、データの転送量を小さく抑える事よりも、受け取ったデータの扱いやすさを優先する出力形式です。
+検索結果の件数が小さい事があらかじめ分かっている場合や、管理機能などのそれほど多量のアクセスが見込まれない場合などに適しています。
+
+##### `attributes` {#response-query-complex-attributes}
+
+出力されたレコードのカラムについての情報を含むハッシュで、[検索クエリの `output`](#query-output) における `attributes` で指定された出力カラム名がキー、カラムの情報が値となります。
+
+個々のカラムの情報はハッシュの形をとり、その形式はレコードの値に応じて以下の3種類で与えられます。ハッシュのキーと値は以下のとおりです。
+
+###### 通常のカラム
+
+`type`
+: カラムの値の型を示す文字列です。
+  値は[Groonga のプリミティブなデータ型](http://groonga.org/ja/docs/reference/types.html)の名前か、もしくはテーブル名です。
+
+`vector`
+: カラムが[ベクター型](http://groonga.org/ja/docs/tutorial/data.html#vector-types)かどうかを示す真偽値です。
+  以下のいずれかの値をとります。
+  
+   * `true`  : カラムはベクター型である。
+   * `false` : カラムはベクター型ではない(スカラー型である)。
+
+###### サブレコードに対応するカラム
+
+サブレコードのカラム情報を含む配列です。この形式は主レコードの `attributes` と同様です。つまり `attributes` は再帰的な構造になっています。
+
+###### 式
+
+キーはありません。空のハッシュ `{}` です。
+
+##### `records` {#response-query-complex-records}
+
+
+出力されたレコードの配列です。
+
+個々のレコードは、[検索クエリの `output`](#query-output) における `attributes` で指定された出力カラム名をキー、カラムの値を値としたハッシュとなります。
+
+[日時型](http://groonga.org/ja/docs/tutorial/data.html#date-and-time-type)のカラムの値は、[W3C-DTF](http://www.w3.org/TR/NOTE-datetime "Date and Time Formats")のタイムゾーンを含む形式の文字列として出力されます。
+
+
+## エラーの種類 {#errors}
+
+このコマンドは[一般的なエラー](/ja/reference/message/#error)に加えて、場合に応じて以下のエラーを返します。
+
+### `MissingSourceParameter`
+
+`source` の指定がないクエリがあることを示します。ステータスコードは `400` です。
+
+### `UnknownSource`
+
+`source` の値として、他のクエリの名前ではない、実際には存在しないテーブルの名前が指定されていることを示します。ステータスコードは `404` です。
+
+### `CyclicSource`
+
+`source` の循環参照があることを示します。ステータスコードは `400` です。
+
+### `SearchTimeout`
+
+`timeout` で指定された時間内に検索処理が完了しなかったことを示します。ステータスコードは `500` です。

  Added: ja/reference/1.1.0/commands/select/index.md (+139 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/commands/select/index.md    2014-11-30 23:20:40 +0900 (0864032)
@@ -0,0 +1,139 @@
+---
+title: select
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/commands/select/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## 概要 {#abstract}
+
+`select` は、テーブルから指定された条件にマッチするレコードを検索し、見つかったレコードを返却します。
+
+このコマンドは[Groonga の `select` コマンド](http://groonga.org/ja/docs/reference/commands/select.html)と互換性があります。
+
+## APIの形式 {#api-types}
+
+### HTTP {#api-types-http}
+
+リクエスト先
+: `(ドキュメントルート)/d/select`
+
+リクエストメソッド
+: `GET`
+
+リクエストのURLパラメータ
+: [パラメータの一覧](#parameters)で定義されている物を指定します。
+
+リクエストのbody
+: なし。
+
+レスポンスのbody
+: [レスポンスメッセージ](#response)。
+
+### REST {#api-types-rest}
+
+対応していません。
+
+### Fluentd {#api-types-fluentd}
+
+形式
+: Request-Response型。コマンドに対しては必ず対応するレスポンスが返されます。
+
+リクエストの `type`
+: `select`
+
+リクエストの `body`
+: [パラメータ](#parameters)のハッシュ。
+
+レスポンスの `type`
+: `select.result`
+
+## パラメータの構文 {#syntax}
+
+    {
+      "table"            : "<テーブル名>",
+      "match_columns"    : "<検索対象のカラム名のリストを'||'区切りで指定>",
+      "query"            : "<単純な検索条件>",
+      "filter"           : "<複雑な検索条件>",
+      "scorer"           : "<見つかったすべてのレコードに適用する式>",
+      "sortby"           : "<ソートキーにするカラム名のリストをカンマ(',')区切りで指定>",
+      "output_columns"   : "<返却するカラム名のリストをカンマ(',')区切りで指定>",
+      "offset"           : <ページングの起点>,
+      "limit"            : <返却するレコード数>,
+      "drilldown"        : "<ドリルダウンするカラム名>",
+      "drilldown_sortby" : "<ドリルダウン結果のソートキーにするカラム名のリストをカンマ(',')区切りで指定>",
+      "drilldown_output_columns" :
+                           "<ドリルダウン結果として返却するカラム名のリストをカンマ(',')区切りで指定>",
+      "drilldown_offset" : <ドリルダウン結果のページングの起点>,
+      "drilldown_limit"  : <返却するドリルダウン結果のレコード数>,
+      "cache"            : "<クエリキャッシュの指定>",
+      "match_escalation_threshold":
+                           <検索方法をエスカレーションする閾値>,
+      "query_flags"      : "<queryパラメーターのカスタマイズ用フラグ>",
+      "query_expander"   : "<クエリー展開用の引数>"
+    }
+
+## パラメータの詳細 {#parameters}
+
+`table` 以外のパラメータはすべて省略可能です。
+
+また、バージョン {{ site.droonga_version }} の時点では以下のパラメータのみが動作します。
+これら以外のパラメータは未実装のため無視されます。
+
+ * `table`
+ * `match_columns`
+ * `query`
+ * `query_flags`
+ * `filter`
+ * `output_columns`
+ * `offset`
+ * `limit`
+ * `drilldown`
+ * `drilldown_output_columns`
+ * `drilldown_sortby`
+ * `drilldown_offset`
+ * `drilldown_limit`
+
+すべてのパラメータの意味は[Groonga の `select` コマンドの引数](http://groonga.org/ja/docs/reference/commands/select.html#parameters)と共通です。詳細はGroongaのコマンドリファレンスを参照して下さい。
+
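+参考として、Fluentd形式での `select` リクエストの一例を示します。テーブル名や検索条件は説明のための仮の値です。
+
+    {
+      "type" : "select",
+      "body" : {
+        "table"          : "Person",
+        "match_columns"  : "name",
+        "query"          : "Alice",
+        "output_columns" : "_key,age",
+        "limit"          : 10
+      }
+    }
+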
+## レスポンス {#response}
+
+このコマンドは、レスポンスの `body` として検索結果の配列を返却します。
+
+    [
+      [
+        <Groonga's status code>,
+        <Start time>,
+        <Elapsed time>
+      ],
+      <検索結果>
+    ]
+
+検索結果の配列の構造は[Groonga の `select` コマンドの返り値](http://groonga.org/ja/docs/reference/commands/select.html#id6)と共通です。詳細はGroongaのコマンドリファレンスを参照して下さい。
+
+このコマンドはレスポンスの `statusCode` として常に `200` を返します。これは、Groonga互換コマンドのエラー情報はGroongaのそれと同じ形で処理される必要があるためです。
+
+レスポンスの `body` の詳細:
+
+ステータスコード
+: コマンドが正常に受け付けられたかどうかを示す整数値です。以下のいずれかの値をとります。
+  
+   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : 正常に処理された。
+   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : 引数が不正である。
+
+開始時刻
+: 処理を開始した時刻を示す数値(UNIX秒)。
+
+処理に要した時間
+: 処理を開始してから完了までの間にかかった時間を示す数値。
+

  Added: ja/reference/1.1.0/commands/system/index.md (+18 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/commands/system/index.md    2014-11-30 23:20:40 +0900 (572b294)
@@ -0,0 +1,18 @@
+---
+title: system
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/commands/system/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+`system` は、クラスタのシステム情報を取得するためのコマンド群のための名前空間です。
+
+ * [system.status](status/): クラスタのステータス情報を取得します。
+

  Added: ja/reference/1.1.0/commands/system/status/index.md (+115 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/commands/system/status/index.md    2014-11-30 23:20:40 +0900 (d4401cd)
@@ -0,0 +1,115 @@
+---
+title: system.status
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/commands/system/status/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## 概要 {#abstract}
+
+`system.status` コマンドは、クラスタの現在の状態を返します。
+
+## APIの形式 {#api-types}
+
+### HTTP {#api-types-http}
+
+リクエスト先
+: `(ドキュメントルート)/droonga/system/status`
+
+リクエストメソッド
+: `GET`
+
+リクエストのURLパラメータ
+: なし。
+
+リクエストのbody
+: なし。
+
+レスポンスのbody
+: [レスポンスメッセージ](#response)。
+
+### REST {#api-types-rest}
+
+対応していません。
+
+### Fluentd {#api-types-fluentd}
+
+形式
+: Request-Response型。コマンドに対しては必ず対応するレスポンスが返されます。
+
+リクエストの `type`
+: `system.status`
+
+リクエストの `body`
+: なし。
+
+レスポンスの `type`
+: `system.status.result`
+
+## パラメータの構文 {#syntax}
+
+このコマンドはパラメータを取りません。
+
+## 使い方 {#usage}
+
+このコマンドは各ノードの死活情報を出力します。
+例:
+
+    {
+      "type" : "system.status",
+      "body" : {}
+    }
+    
+    => {
+         "type" : "system.status.result",
+         "body" : {
+           "nodes": {
+             "192.168.0.10:10031/droonga": {
+               "live": true
+             },
+             "192.168.0.11:10031/droonga": {
+               "live": false
+             }
+           }
+         }
+       }
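+
+HTTP経由では、以下のようにして同じ情報を取得できます(ホスト名とポート番号は環境に応じて読み替えて下さい。ここでは説明のための仮の値です):
+
+    $ curl "http://192.168.0.10:10041/droonga/system/status"
+    {"nodes":{"192.168.0.10:10031/droonga":{"live":true},"192.168.0.11:10031/droonga":{"live":false}}}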
+
+
+## レスポンス {#response}
+
+このコマンドは、以下のようなハッシュを `body` 、`200` を `statusCode` としたレスポンスを返します。
+
+    {
+      "nodes" : {
+        "<Identifier of the node 1>" : {
+          "live" : <Vital status of the node>
+        },
+        "<Identifier of the node 2>" : { ... },
+        ...
+      }
+    }
+
+`nodes`
+: クラスタ内のノードの情報を含むハッシュ。
+  ハッシュのキーは、`catalog.json` で定義された各ノードの識別子(形式は `ホスト名:ポート番号/タグ`)です。
+  ハッシュの値は対応するノードのステータス情報を表し、以下の情報を含んでいます:
+  
+  `live`
+  : そのノードの死活状態を示す真偽値。
+    `true` であれば、そのノードはメッセージを処理する事ができ、他のノードもそのノード宛にメッセージを配送します。
+    それ以外の場合、そのノードはサービスが停止しているなどの理由によりメッセージを処理しません。
+
+
+## エラーの種類 {#errors}
+
+このコマンドは[一般的なエラー](/reference/message/#error)を返します。

  Added: ja/reference/1.1.0/commands/table-create/index.md (+111 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/commands/table-create/index.md    2014-11-30 23:20:40 +0900 (4189960)
@@ -0,0 +1,111 @@
+---
+title: table_create
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/commands/table-create/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## 概要 {#abstract}
+
+`table_create` は、新しいテーブルを作成します。
+
+このコマンドは[Groonga の `table_create` コマンド](http://groonga.org/ja/docs/reference/commands/table_create.html)と互換性があります。
+
+## APIの形式 {#api-types}
+
+### HTTP {#api-types-http}
+
+リクエスト先
+: `(ドキュメントルート)/d/table_create`
+
+リクエストメソッド
+: `GET`
+
+リクエストのURLパラメータ
+: [パラメータの一覧](#parameters)で定義されている物を指定します。
+
+リクエストのbody
+: なし。
+
+レスポンスのbody
+: [レスポンスメッセージ](#response)。
+
+### REST {#api-types-rest}
+
+対応していません。
+
+### Fluentd {#api-types-fluentd}
+
+形式
+: Request-Response型。コマンドに対しては必ず対応するレスポンスが返されます。
+
+リクエストの `type`
+: `table_create`
+
+リクエストの `body`
+: [パラメータ](#parameters)のハッシュ。
+
+レスポンスの `type`
+: `table_create.result`
+
+## パラメータの構文 {#syntax}
+
+    {
+      "name"              : "<テーブル名>",
+      "flags"             : "<テーブルの属性>",
+      "key_type"          : "<主キーの型>",
+      "value_type"        : "<値の型>",
+      "default_tokenizer" : "<既定のトークナイザー>",
+      "normalizer"        : "<ノーマライザー>"
+    }
+
+## パラメータの詳細 {#parameters}
+
+`name` 以外のパラメータはすべて省略可能です。
+
+すべてのパラメータは[Groonga の `table_create` コマンドの引数](http://groonga.org/ja/docs/reference/commands/table_create.html#parameters)と共通です。詳細はGroongaのコマンドリファレンスを参照して下さい。
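+
+以下は、Fluentdプロトコル経由でテーブルを作成する場合のリクエストメッセージの一例です(テーブル名 `Store` や各属性の値は説明のための仮の物です):
+
+    {
+      "type" : "table_create",
+      "body" : {
+        "name"     : "Store",
+        "flags"    : "TABLE_PAT_KEY",
+        "key_type" : "ShortText"
+      }
+    }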
+
+## レスポンス {#response}
+
+このコマンドは、レスポンスの `body` としてコマンドの実行結果に関する情報を格納した配列を返却します。
+
+    [
+      [
+        <Groongaのステータスコード>,
+        <開始時刻>,
+        <処理に要した時間>
+      ],
+      <テーブルが作成されたかどうか>
+    ]
+
+このコマンドはレスポンスの `statusCode` として常に `200` を返します。これは、Groonga互換コマンドのエラー情報はGroongaのそれと同じ形で処理される必要があるためです。
+
+レスポンスの `body` の詳細:
+
+ステータスコード
+: コマンドが正常に受け付けられたかどうかを示す整数値です。以下のいずれかの値をとります。
+  
+   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : 正常に処理された。
+   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : 引数が不正である。
+
+開始時刻
+: 処理を開始した時刻を示す数値(UNIX秒)。
+
+処理に要した時間
+: 処理を開始してから完了までの間にかかった時間を示す数値。
+
+テーブルが作成されたかどうか
+: テーブルが作成されたかどうかを示す真偽値です。以下のいずれかの値をとります。
+  
+   * `true`:テーブルを作成した。
+   * `false`:テーブルを作成しなかった。

  Added: ja/reference/1.1.0/commands/table-list/index.md (+91 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/commands/table-list/index.md    2014-11-30 23:20:40 +0900 (47c23c9)
@@ -0,0 +1,91 @@
+---
+title: table_list
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/commands/table-list/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## 概要 {#abstract}
+
+`table_list` は、データセット内に存在するすべてのテーブルの一覧を返します。
+
+このコマンドは[Groonga の `table_list` コマンド](http://groonga.org/docs/reference/commands/table_list.html)と互換性があります。
+
+## APIの形式 {#api-types}
+
+### HTTP {#api-types-http}
+
+リクエスト先
+: `(ドキュメントルート)/d/table_list`
+
+リクエストメソッド
+: `GET`
+
+リクエストのURLパラメータ
+: なし。
+
+リクエストのbody
+: なし。
+
+レスポンスのbody
+: [レスポンスメッセージ](#response)。
+
+### REST {#api-types-rest}
+
+対応していません。
+
+### Fluentd {#api-types-fluentd}
+
+形式
+: Request-Response型。コマンドに対しては必ず対応するレスポンスが返されます。
+
+リクエストの `type`
+: `table_list`
+
+リクエストの `body`
+: `null` または空のハッシュ。
+
+レスポンスの `type`
+: `table_list.result`
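+
+例えば、Groonga互換のHTTPインターフェース経由では、以下のようにしてテーブルの一覧を取得できます(ホスト名とポート番号は環境に応じた仮の値です):
+
+    $ curl "http://192.168.0.10:10041/d/table_list"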
+
+## レスポンス {#response}
+
+このコマンドは、レスポンスの `body` としてテーブルの一覧を含む配列を返却します。
+
+    [
+      [
+        <Groongaのステータスコード>,
+        <開始時刻>,
+        <処理に要した時間>
+      ],
+      <テーブルの一覧>
+    ]
+
+返却される配列の構造は[Groonga の `table_list` コマンドの返り値](http://groonga.org/docs/reference/commands/table_list.html#id5)と共通です。詳細はGroongaのコマンドリファレンスを参照して下さい。
+
+このコマンドはレスポンスの `statusCode` として常に `200` を返します。これは、Groonga互換コマンドのエラー情報はGroongaのそれと同じ形で処理される必要があるためです。
+
+レスポンスの `body` の詳細:
+
+ステータスコード
+: コマンドが正常に受け付けられたかどうかを示す整数値です。以下のいずれかの値をとります。
+  
+   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : 正常に処理された。
+   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : 引数が不正である。
+
+開始時刻
+: 処理を開始した時刻を示す数値(UNIX秒)。
+
+処理に要した時間
+: 処理を開始してから完了までの間にかかった時間を示す数値。
+

  Added: ja/reference/1.1.0/commands/table-remove/index.md (+106 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/commands/table-remove/index.md    2014-11-30 23:20:40 +0900 (7d1ac6c)
@@ -0,0 +1,106 @@
+---
+title: table_remove
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/commands/table-remove/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## 概要 {#abstract}
+
+`table_remove` は、既存のテーブルを1つ削除します。
+
+このコマンドは[Groonga の `table_remove` コマンド](http://groonga.org/ja/docs/reference/commands/table_remove.html)と互換性があります。
+
+## APIの形式 {#api-types}
+
+### HTTP {#api-types-http}
+
+リクエスト先
+: `(ドキュメントルート)/d/table_remove`
+
+リクエストメソッド
+: `GET`
+
+リクエストのURLパラメータ
+: [パラメータの一覧](#parameters)で定義されている物を指定します。
+
+リクエストのbody
+: なし。
+
+レスポンスのbody
+: [レスポンスメッセージ](#response)。
+
+### REST {#api-types-rest}
+
+対応していません。
+
+### Fluentd {#api-types-fluentd}
+
+形式
+: Request-Response型。コマンドに対しては必ず対応するレスポンスが返されます。
+
+リクエストの `type`
+: `table_remove`
+
+リクエストの `body`
+: [パラメータ](#parameters)のハッシュ。
+
+レスポンスの `type`
+: `table_remove.result`
+
+## パラメータの構文 {#syntax}
+
+    {
+      "name" : "<テーブル名>"
+    }
+
+## パラメータの詳細 {#parameters}
+
+唯一のパラメータとなる `name` は省略不可能です。
+
+すべてのパラメータは[Groonga の `table_remove` コマンドの引数](http://groonga.org/ja/docs/reference/commands/table_remove.html#parameters)と共通です。詳細はGroongaのコマンドリファレンスを参照して下さい。
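+
+以下は、Fluentdプロトコル経由でのリクエストとレスポンスの一例です(テーブル名 `Store` は説明のための仮の物です):
+
+    {
+      "type" : "table_remove",
+      "body" : {
+        "name" : "Store"
+      }
+    }
+    
+    => {
+         "type" : "table_remove.result",
+         "body" : [
+           [0, <開始時刻>, <処理に要した時間>],
+           true
+         ]
+       }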
+
+## レスポンス {#response}
+
+このコマンドは、レスポンスの `body` としてコマンドの実行結果に関する情報を格納した配列を返却します。
+
+    [
+      [
+        <Groongaのステータスコード>,
+        <開始時刻>,
+        <処理に要した時間>
+      ],
+      <テーブルが削除されたかどうか>
+    ]
+
+このコマンドはレスポンスの `statusCode` として常に `200` を返します。これは、Groonga互換コマンドのエラー情報はGroongaのそれと同じ形で処理される必要があるためです。
+
+レスポンスの `body` の詳細:
+
+ステータスコード
+: コマンドが正常に受け付けられたかどうかを示す整数値です。以下のいずれかの値をとります。
+  
+   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : 正常に処理された。
+   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : 引数が不正である。
+
+開始時刻
+: 処理を開始した時刻を示す数値(UNIX秒)。
+
+処理に要した時間
+: 処理を開始してから完了までの間にかかった時間を示す数値。
+
+テーブルが削除されたかどうか
+: テーブルが削除されたかどうかを示す真偽値です。以下のいずれかの値をとります。
+  
+   * `true`:テーブルを削除した。
+   * `false`:テーブルを削除しなかった。

  Added: ja/reference/1.1.0/http-server/index.md (+164 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/http-server/index.md    2014-11-30 23:20:40 +0900 (2e3e293)
@@ -0,0 +1,164 @@
+---
+title: HTTPサーバ
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/http-server/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## 概要 {#abstract}
+
+[Droonga HTTP Server][droonga-http-server]は、Droonga Engine用のHTTP Protocol Adapterです。
+
+Droonga Engineはfluentdプロトコルにのみ対応しているため、Droonga Engineとの通信には`fluent-cat`などを使う必要があります。
+このアプリケーションは、Droonga EngineとHTTP経由で通信するための機能を提供します。
+
+## インストール {#install}
+
+Droonga HTTP Serverは、[Node.js][] 用の[droonga-http-server npmモジュール][droonga-http-server npm module]として提供されています。
+以下のように、`npm`コマンドでインストールすることができます:
+
+    # npm install -g droonga-http-server
+
+## 使い方 {#usage}
+
+### コマンドラインオプション {#usage-command}
+
+Droonga HTTP Serverは、HTTPサーバを起動するための`droonga-http-server`コマンドを含んでいます。
+以下のようにコマンドラインオプションを指定して起動できます:
+
+    # droonga-http-server --port 3003
+
+指定可能なオプションと既定値は以下の通りです:
+
+`--port <13000>`
+: HTTPリクエストを受け付けるポート番号。
+
+`--receive-host-name <127.0.0.1>`
+: HTTPサーバが動作するコンピュータ自身のホスト名(またはIPアドレス)。
+  Droonga EngineからProtocol Adapterへレスポンスのメッセージを送出する際の宛先に使われます。
+
+`--droonga-engine-host-name <127.0.0.1>`
+: Droonga Engineが動作するコンピュータのホスト名(またはIPアドレス)。
+
+`--droonga-engine-port <24224>`
+: Droonga Engineがメッセージを受け付けるポートの番号。
+
+`--default-dataset <Droonga>`
+: 既定のデータセット名。
+  組み込みのHTTP APIから発行されるリクエストに使われます。
+
+`--tag <droonga>`
+: Droonga Engineに送るfluentdメッセージのタグとして使われます。
+
+`--enable-logging`
+: このオプションを指定した場合、ログが標準出力に出力されるようになります。
+
+`--cache-size <100>`
+: LRUレスポンスキャッシュの最大サイズ。
+  Droonga HTTP ServerはすべてのGETリクエストについて、レスポンスをここで指定した件数までメモリ上にキャッシュします。
+
+コマンドラインオプションには、組み合わせるDroonga Engineに合わせた値を適切に指定する必要があります。例えば、HTTPサーバが192.168.10.90のコンピュータ上で動作し、Droonga Engineが192.168.10.100のコンピュータ上で以下の設定を伴って動作する時:
+
+fluentd.conf:
+
+    <source>
+      type forward
+      port 24324
+    </source>
+    <match books.message>
+      name localhost:24224/books
+      type droonga
+    </match>
+    <match output.message>
+      type stdout
+    </match>
+
+catalog.json:
+
+    {
+      "version": 2,
+      "effectiveDate": "2013-09-01T00:00:00Z",
+      "datasets": {
+        "Books": {
+          ...
+        }
+      }
+    }
+
+この時、192.168.10.90のコンピュータ上でHTTPサーバを起動する際のコマンドラインオプションは以下のようになります:
+
+    # droonga-http-server --receive-host-name 192.168.10.90 \
+                          --droonga-engine-host-name 192.168.10.100 \
+                          --droonga-engine-port 24324 \
+                          --default-dataset Books \
+                          --tag books
+
+[基本のチュートリアル][basic tutorial]も併せて参照して下さい。
+
+## 組み込みのAPI {#usage-api}
+
+Droonga HTTP Serverは以下のAPIを含んでいます:
+
+### REST API {#usage-rest}
+
+#### `GET /tables/<テーブル名>` {#usage-rest-get-tables-table}
+
+単純な[検索リクエスト](../commands/search/)を発行します。
+リクエストの[`source`](../commands/search/#query-source)は、パス中で指定されたテーブル名となります。
+指定できるクエリパラメータは以下の通りです:
+
+`attributes`
+: [`output.attributes`](../commands/search/#query-output)に対応。
+  値はカンマ区切りのリストです。例:`attributes=_key,name,age`.
+
+`query`
+: [`condition.*.query`](../commands/search/#query-condition-query-syntax-hash)に対応。
+  値はクエリ文字列です。
+
+`match_to`
+: [`condition.*.matchTo`](../commands/search/#query-condition-query-syntax-hash)に対応。
+  値はカンマ区切りのリストです。例:`match_to=_key,name`.
+
+`match_escalation_threshold`
+: [`condition.*.matchEscalationThreshold`](../commands/search/#query-condition-query-syntax-hash)に対応。
+  値は整数です。
+
+`script`
+: [`condition`](../commands/search/#query-condition-query-syntax-hash)におけるスクリプト形式の指定に対応。もし`query`と両方同時に指定した場合には、両者の`and`条件と見なされます。
+
+`adjusters`
+: `adjusters`に対応します。
+
+`sort_by`
+: [`sortBy`](../commands/search/#query-sortBy)に対応します。
+  値はカラム名の文字列です。
+
+`limit`
+: [`output.limit`](../commands/search/#query-output)に対応。
+  値は整数です。
+
+`offset`
+: [`output.offset`](../commands/search/#query-output)に対応。
+  値は整数です。
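+
+例えば、以下はテーブル `Books` から検索を行うリクエストの一例です(ホスト名・ポート番号・テーブル名・カラム名は説明のための仮の値です):
+
+    $ curl "http://192.168.10.90:3003/tables/Books?query=Droonga&attributes=_key,title&limit=10"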
+
+### Groonga HTTPサーバ互換API {#usage-groonga}
+
+#### `GET /d/<コマンド名>` {#usage-groonga-d}
+
+(未稿)
+
+
+  [basic tutorial]: ../../tutorial/basic/
+  [droonga-http-server]: https://github.com/droonga/droonga-http-server
+  [droonga-http-server npm module]: https://npmjs.org/package/droonga-http-server
+  [Node.js]: http://nodejs.org/

  Added: ja/reference/1.1.0/index.md (+28 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/index.md    2014-11-30 23:20:40 +0900 (ee684f2)
@@ -0,0 +1,28 @@
+---
+title: リファレンスマニュアル
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+[カタログの仕様](catalog/)
+: Droonga Engineの振る舞いを定義する`catalog.json`の詳細。
+
+[メッセージの形式](message/)
+: Droonga Engine内を流れるメッセージの形式の詳細。
+
+[コマンドリファレンス](commands/)
+: Droonga Engineで利用可能な組み込みのコマンドの詳細。
+
+[HTTPサーバ](http-server/)
+: [droonga-http-server](https://github.com/droonga/droonga-http-server)の使用方法。
+
+[プラグイン開発](plugin/)
+: Droonga Engine用の独自のプラグインを開発するための公開APIの詳細。

  Added: ja/reference/1.1.0/message/index.md (+215 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/message/index.md    2014-11-30 23:20:40 +0900 (4c5d626)
@@ -0,0 +1,215 @@
+---
+title: メッセージ形式
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/message/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+
+## リクエスト {#request}
+
+リクエストのメッセージの基本的な形式は以下の通りです。
+
+    {
+      "id"      : "<メッセージの識別子>",
+      "type"    : "<メッセージの種類>",
+      "replyTo" : "<レスポンスの受信者へのパス>",
+      "dataset" : "<対象データセット名>",
+      "body"    : <メッセージ本文>
+    }
+
+### `id` {#request-id}
+
+概要
+: そのメッセージの一意な識別子。
+
+値
+: 識別子となる文字列。一意でさえあれば、どんな形式のどんな文字列でも指定できます。値は対応するレスポンスの[`inReplyTo`](#response-inReplyTo)に使われます。
+
+省略時の既定値
+: なし。この情報は省略できません。
+
+### `type` {#request-type}
+
+概要
+: そのメッセージの種類。
+
+値
+: [コマンド](/ja/reference/commands/)の名前の文字列
+
+省略時の既定値
+: なし。この情報は省略できません。
+
+### `replyTo` {#request-replyTo}
+
+概要
+: レスポンスの受信者へのパス。
+
+値
+: `<ホスト>:<ポート番号>/<タグ名>` で示されたパス文字列。例:`localhost:24224/output`.
+
+省略時の既定値
+: なし。この情報は省略可能で、省略した場合はレスポンスのメッセージは単に捨てられます。
+
+### `dataset` {#request-dataset}
+
+概要
+: 対象となるデータセット。
+
+値
+: データセット名の文字列。
+
+省略時の既定値
+: なし。この情報は省略できません。
+
+### `body` {#request-body}
+
+概要
+: メッセージの本文。
+
+値
+: オブジェクト、文字列、数値、真偽値、または `null`。
+
+省略時の既定値
+: なし。この情報は省略可能です。
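+
+以下は、`search` コマンドのリクエストメッセージの一例です(`id` や `replyTo` などの値は説明のための仮の値です):
+
+    {
+      "id"      : "request1",
+      "type"    : "search",
+      "replyTo" : "localhost:24224/output",
+      "dataset" : "Default",
+      "body"    : {
+        ...
+      }
+    }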
+
+## レスポンス {#response}
+
+レスポンスのメッセージの基本的な形式は以下の通りです。
+
+    {
+      "type"       : "<メッセージの種類>",
+      "inReplyTo"  : "<対応するリクエストメッセージの識別子>",
+      "statusCode" : <ステータスコード>,
+      "body"       : <メッセージの本文>,
+      "errors"     : <ノードから返されたエラー>
+    }
+
+### `type` {#response-type}
+
+概要
+: そのメッセージの種類。
+
+値
+: メッセージの種類を示す文字列。多くの場合は、元のリクエストメッセージの `type` の値に `.result` という接尾辞を伴った文字列です。
+
+### `inReplyTo` {#response-inReplyTo}
+
+概要
+: 対応するリクエストメッセージの識別子。
+
+値
+: 対応するリクエストメッセージの識別子の文字列。
+
+### `statusCode` {#response-statusCode}
+
+概要
+: そのレスポンスのステータスコード。
+
+値
+: ステータスコードを示す整数。
+
+レスポンスのステータスコードはHTTPのステータスコードに似ています。
+
+`200` およびその他の `2xx` のステータス
+: コマンドが正常に処理されたことを示します。
+
+### `body` {#response-body}
+
+概要
+: そのリクエストメッセージの処理結果の情報。
+
+値
+: オブジェクト、文字列、数値、真偽値、または `null`。
+
+### `errors` {#response-errors}
+
+概要
+: 各ノードから返されたすべてのエラー。
+
+値
+: オブジェクト。
+
+この情報は、コマンドが複数のボリュームに分散して処理された時にのみ現れます。それ以外の場合、レスポンスメッセージは `errors` フィールドを含みません。詳細は[エラーレスポンスの説明](#error)を参照して下さい。
+
+## エラーレスポンス {#error}
+
+コマンドの中にはエラーを返す物があります。
+
+エラーレスポンスは通常のレスポンスと同じ `type` を伴って返されますが、通常のレスポンスとは異なる `statusCode` と `body` を持ちます。大まかなエラーの種類は `statusCode` で示され、詳細な情報は `body` の内容として返されます。
+
+コマンドが複数のボリュームに分散して処理されて、各ボリュームがエラーを返した場合、レスポンスメッセージは `errors` フィールドを持ちます。各ボリュームから返されたエラーは以下のように保持されます:
+
+    {
+      "type"       : "add.result",
+      "inReplyTo"  : "...",
+      "statusCode" : 400,
+      "body"       : {
+        "name":    "UnknownTable",
+        "message": ...
+      },
+      "errors"     : {
+        "/path/to/the/node1" : {
+          "statusCode" : 400,
+          "body"       : {
+            "name":    "UnknownTable",
+            "message": ...
+          }
+        },
+        "/path/to/the/node2" : {
+          "statusCode" : 400,
+          "body"       : {
+            "name":    "UnknownTable",
+            "message": ...
+          }
+        }
+      }
+    }
+
+このような場合、すべてのエラーの中で代表的な1つがメッセージの `body` に出力されます。
+
+
+### エラーレスポンスのステータスコード {#error-status}
+
+エラーレスポンスのステータスコードはHTTPのステータスコードに似ています。
+
+`400` およびその他の `4xx` のステータス
+: リクエストのメッセージが原因でのエラーであることを示します。
+
+`500` およびその他の `5xx` のステータス
+: Droonga Engine内部のエラーであることを示します。
+
+### エラーレスポンスの `body` {#error-body}
+
+エラーレスポンスの `body` の基本的な形式は以下の通りです。
+
+    {
+      "name"    : "<エラーの種類>",
+      "message" : "<人間が読みやすい形式で示されたエラーの詳細>",
+      "detail"  : <任意の形式の、追加のエラー情報>
+    }
+
+追加の情報がない場合、 `detail` は出力されないことがあります。
+
+#### エラーの種類 {#error-type}
+
+すべてのコマンドに共通するエラーとして、以下の物があります。
+
+`MissingDatasetParameter`
+: `dataset` の指定がないことを示します。ステータスコードは `400` です。
+
+`UnknownDataset`
+: 指定されたデータセットが存在しないことを示します。ステータスコードは `404` です。
+
+`UnknownType`
+: `type` に指定されたコマンドを処理するハンドラが存在しない、未知のコマンドであることを示します。ステータスコードは `400` です。

  Added: ja/reference/1.1.0/plugin/adapter/index.md (+317 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/plugin/adapter/index.md    2014-11-30 23:20:40 +0900 (a9d2b56)
@@ -0,0 +1,317 @@
+---
+title: 適合フェーズでのプラグインAPI
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/plugin/adapter/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+
+## 概要 {#abstract}
+
+各々のDroonga Engineプラグインは、それ自身のための*アダプター*を持つことができます。適合フェーズでは、アダプターは入力メッセージ(Protocol AdapterからDroonga Engineへ送られてきたリクエストに相当)と出力メッセージ(Droonga EngineからProtocol Adapterへ送られるレスポンスに相当)の両方について変更を加えることができます。
+
+
+### アダプターの定義の仕方 {#howto-define}
+
+例えば、「foo」という名前のプラグインにアダプターを定義する場合は以下のようにします:
+
+~~~ruby
+require "droonga/plugin"
+
+module Droonga::Plugins::FooPlugin
+  extend Plugin
+  register("foo")
+
+  class Adapter < Droonga::Adapter
+    # このアダプターを設定するための操作
+    XXXXXX = XXXXXX
+
+    def adapt_input(input_message)
+      # 入力メッセージを変更するための操作
+      input_message.XXXXXX = XXXXXX
+    end
+
+    def adapt_output(output_message)
+      # 出力メッセージを変更するための操作
+      output_message.XXXXXX = XXXXXX
+    end
+  end
+end
+~~~
+
+アダプターを定義するための手順は以下の通りです:
+
+ 1. プラグイン用のモジュール(例:`Droonga::Plugins::FooPlugin`)を定義し、プラグインとして登録する。(必須)
+ 2. [`Droonga::Adapter`](#classes-Droonga-Adapter)を継承したアダプタークラス(例:`Droonga::Plugins::FooPlugin::Adapter`)を定義する。(必須)
+ 3. [アダプターを適用する条件を設定する](#howto-configure)。(必須)
+ 4. 入力メッセージに対する変更操作を[`#adapt_input`](#classes-Droonga-Adapter-adapt_input)として定義する。(任意)
+ 5. 出力メッセージに対する変更操作を[`#adapt_output`](#classes-Droonga-Adapter-adapt_output)として定義する。(任意)
+
+[プラグイン開発のチュートリアル](../../../tutorial/plugin-development/adapter/)も参照して下さい。
+
+
+### アダプターはどのように操作するか {#how-works}
+
+アダプターは以下のように動作します:
+
+ 1. Droonga Engineが起動する。
+    * アダプタークラス(例:`Droonga::Plugins::FooPlugin::Adapter`)の唯一のインスタンスが作られ、登録される。
+      * 入力のマッチングパターンおよび出力のマッチングパターンが登録される。
+    * Droonga Engineが起動し、入力メッセージを待ち受ける。
+ 2. 入力メッセージがProtocol AdapterからDroonga Engineへ送られてくる。
+    この時点で(入力メッセージ用の)適合フェーズが開始される。
+    * そのメッセージが[入力のマッチングパターン](#config)にマッチするアダプターについて、アダプターの[`#adapt_input`](#classes-Droonga-Adapter-adapt_input)が呼ばれる。
+    * このメソッドは、[入力メッセージ自身が持つメソッド](#classes-Droonga-InputMessage)を通じて入力メッセージを変更することができる。
+ 3. すべてのアダプターが適用された時点で、入力メッセージ用の適合フェーズが終了し、メッセージが次の立案フェーズに送られる。
+ 4. 出力メッセージが前の集約フェーズから送られてくる。
+    この時点で(出力メッセージ用の)適合フェーズが開始される。
+    * そのメッセージが以下の両方の条件を満たす場合に、アダプターの[`#adapt_output`](#classes-Droonga-Adapter-adapt_output)が呼ばれる:
+      - そのメッセージが、そのアダプター自身によって処理された入力メッセージに起因した物である。
+      - そのメッセージが、アダプターの[出力のマッチングパターン](#config)にマッチする。
+    * このメソッドは、[出力メッセージ自身が持つメソッド](#classes-Droonga-OutputMessage)を通じて出力メッセージを変更することができる。
+ 5. すべてのアダプターが適用された時点で、出力メッセージ用の適合フェーズが終了し、メッセージがProtocol Adapterに送られる。
+
+上記の通り、Droonga Engineは各プラグインのアダプタークラスについて、インスタンスを全体で1つだけ生成します。
+対になった入力メッセージと出力メッセージのための状態を示す情報をアダプター自身のインスタンス変数として保持してはいけません。
+代わりに、状態を示す情報を入力メッセージのbodyの一部として埋め込み、対応する出力メッセージのbodyから取り出すようにして下さい。
+
+アダプター内で発生したすべてのエラーは、Droonga Engine自身によって処理されます。[エラー処理][error handling]も併せて参照して下さい。
+
+
+## 設定 {#config}
+
+`input_message.pattern` ([マッチングパターン][matching pattern], 省略可能, 初期値=`nil`)
+: 入力メッセージに対する[マッチングパターン][matching pattern]。
+  パターンが指定されていない(もしくは`nil`が指定された)場合は、すべてのメッセージがマッチします。
+
+`output_message.pattern` ([マッチングパターン][matching pattern], 省略可能, 初期値=`nil`)
+: 出力メッセージに対する[マッチングパターン][matching pattern]。
+  パターンが指定されていない(もしくは`nil`が指定された)場合は、すべてのメッセージがマッチします。
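+
+例えば、以下は `search` コマンドのリクエストと、それに対応する `search.result` 型のレスポンスのみを処理対象とする設定の一例です(パターンの内容は説明のための仮の物です):
+
+~~~ruby
+module Droonga::Plugins::FooPlugin
+  class Adapter < Droonga::Adapter
+    # 入力メッセージ:typeが"search"のメッセージのみを処理対象にする
+    input_message.pattern  = ["type", :equal, "search"]
+    # 出力メッセージ:typeが"search.result"のメッセージのみを処理対象にする
+    output_message.pattern = ["type", :equal, "search.result"]
+  end
+end
+~~~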
+
+## クラスとメソッド {#classes}
+
+### `Droonga::Adapter` {#classes-Droonga-Adapter}
+
+これはすべてのアダプターに共通の基底クラスです。独自プラグインのアダプタークラスは、このクラスを継承する必要があります。
+
+#### `#adapt_input(input_message)` {#classes-Droonga-Adapter-adapt_input}
+
+このメソッドは、[`Droonga::InputMessage`](#classes-Droonga-InputMessage)でラップされた入力メッセージを受け取ります。
+入力メッセージは、メソッドを通じて内容を変更することができます。
+
+この基底クラスにおいて、このメソッドは何もしない単なるプレースホルダとして定義されています。
+入力メッセージを変更するには、以下のようにメソッドを再定義して下さい:
+
+~~~ruby
+module Droonga::Plugins::QueryFixer
+  class Adapter < Droonga::Adapter
+    def adapt_input(input_message)
+      input_message.body["query"] = "fixed query"
+    end
+  end
+end
+~~~
+
+#### `#adapt_output(output_message)` {#classes-Droonga-Adapter-adapt_output}
+
+このメソッドは、[`Droonga::OutputMessage`](#classes-Droonga-OutputMessage)でラップされた出力メッセージを受け取ります。
+出力メッセージは、メソッドを通じて内容を変更することができます。
+
+この基底クラスにおいて、このメソッドは何もしない単なるプレースホルダとして定義されています。
+出力メッセージを変更するには、以下のようにメソッドを再定義して下さい:
+
+~~~ruby
+module Droonga::Plugins::ErrorConcealer
+  class Adapter < Droonga::Adapter
+    def adapt_output(output_message)
+      output_message.status_code = Droonga::StatusCode::OK
+    end
+  end
+end
+~~~
+
+### `Droonga::InputMessage` {#classes-Droonga-InputMessage}
+
+#### `#type`, `#type=(type)` {#classes-Droonga-InputMessage-type}
+
+入力メッセージの`"type"`の値を返します。
+
+以下のように、新しい文字列値を代入することで値を変更できます:
+
+~~~ruby
+module Droonga::Plugins::MySearch
+  class Adapter < Droonga::Adapter
+    input_message.pattern = ["type", :equal, "my-search"]
+
+    def adapt_input(input_message)
+      p input_message.type
+      # => "my-search"
+      #    このメッセージは「my-search」というメッセージタイプに
+      #    対応したプラグインによって処理される。
+
+      input_message.type = "search"
+
+      p input_message.type
+      # => "search"
+      #    メッセージタイプが変更された。
+      #    このメッセージはsearchプラグインによって、
+      #    通常の検索リクエストとして処理される。
+    end
+  end
+end
+~~~
+
+#### `#body`, `#body=(body)` {#classes-Droonga-InputMessage-body}
+
+入力メッセージの`"body"`の値を返します。
+
+以下のように、新しい値を代入したり部分的に値を代入したりすることで、値を変更することができます:
+
+~~~ruby
+module Droonga::Plugins::MinimumLimit
+  class Adapter < Droonga::Adapter
+    input_message.pattern = ["type", :equal, "search"]
+
+    MAXIMUM_LIMIT = 10
+
+    def adapt_input(input_message)
+      input_message.body["queries"].each do |name, query|
+        query["output"] ||= {}
+        query["output"]["limit"] ||= MAXIMUM_LIMIT
+        query["output"]["limit"] = [query["output"]["limit"], MAXIMUM_LIMIT].min
+      end
+      # この時点で、すべての検索クエリが"output.limit=10"の指定を持っている。
+    end
+  end
+end
+~~~
+
+別の例:
+
+~~~ruby
+module Droonga::Plugins::MySearch
+  class Adapter < Droonga::Adapter
+    input_message.pattern = ["type", :equal, "my-search"]
+
+    def adapt_input(input_message)
+      # 独自形式のメッセージからクエリ文字列を取り出す。
+      query_string = input_message["body"]["query"]
+
+      # "search"型の内部的な検索リクエストを組み立てる。
+      input_message.type = "search"
+      input_message.body = {
+        "queries" => {
+          "source"    => "Store",
+          "condition" => {
+            "query"   => query_string,
+            "matchTo" => ["name"],
+          },
+          "output" => {
+            "elements" => ["records"],
+            "limit"    => 10,
+          },
+        },
+      }
+      # この時点で、"type"と"body"は両方とも完全に置き換えられている。
+    end
+  end
+end
+~~~
+
+### `Droonga::OutputMessage` {#classes-Droonga-OutputMessage}
+
+#### `#status_code`, `#status_code=(status_code)` {#classes-Droonga-OutputMessage-status_code}
+
+出力メッセージの`"statusCode"`の値を返します。
+
+以下のように、新しいステータスコードを代入することで値を変更できます: 
+
+~~~ruby
+module Droonga::Plugins::ErrorConcealer
+  class Adapter < Droonga::Adapter
+    input_message.pattern = ["type", :equal, "search"]
+
+    def adapt_output(output_message)
+      unless output_message.status_code == StatusCode::InternalServerError
+        output_message.status_code = Droonga::StatusCode::OK
+        output_message.body = {}
+        output_message.errors = nil
+        # この時点で、内部的なサーバーエラーはすべて無視されるため
+        # クライアントは通常のレスポンスを受け取る事になる。
+      end
+    end
+  end
+end
+~~~
+
+#### `#errors`, `#errors=(errors)` {#classes-Droonga-OutputMessage-errors}
+
+出力メッセージの`"errors"`の値を返します。
+
+以下のように、新しいエラー情報を代入したり値を部分的に書き換えたりする事ができます:
+
+~~~ruby
+module Droonga::Plugins::ErrorExporter
+  class Adapter < Droonga::Adapter
+    input_message.pattern = ["type", :equal, "search"]
+
+    def adapt_output(output_message)
+      output_message.errors.delete(secret_database)
+      # 秘密のデータベースからのエラー情報を削除する。
+
+      output_message.body["errors"] = {
+        "records" => output_message.errors.collect do |database, error|
+          {
+            "database" => database,
+            "error" => error
+          }
+        end,
+      }
+      # エラー情報を、"error"という名前の擬似的な検索結果に変換する。
+    end
+  end
+end
+~~~
+
+#### `#body`, `#body=(body)` {#classes-Droonga-OutputMessage-body}
+
+出力メッセージの`"body"`の値を返します。
+
+以下のように、新しい値を代入したり部分的に値を代入したりすることで、値を変更することができます:
+
+~~~ruby
+module Droonga::Plugins::SponsoredSearch
+  class Adapter < Droonga::Adapter
+    input_message.pattern = ["type", :equal, "search"]
+
+    def adapt_output(output_message)
+      output_message.body.each do |name, result|
+        next unless result["records"]
+        result["records"].unshift(sponsored_entry)
+      end
+      # これにより、すべての検索結果が広告エントリを含むようになる。
+    end
+
+    def sponsored_entry
+      {
+        "title"=> "SALE!",
+        "url"=>   "http://..."
+      }
+    end
+  end
+end
+~~~
+
+
+  [matching pattern]: ../matching-pattern/
+  [error handling]: ../error/

  Added: ja/reference/1.1.0/plugin/collector/index.md (+58 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/plugin/collector/index.md    2014-11-30 23:20:40 +0900 (043dbf0)
@@ -0,0 +1,58 @@
+---
+title: コレクター
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/plugin/collector/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+
+## 概要 {#abstract}
+
+コレクターは、2つの入力値を1つの値に結合します。
+Droonga Engineは3つ以上の値に対しても、指定されたコレクターを繰り返し適用することによって、それらを1つの値にします。
+
+## 組み込みのコレクタークラス {#builtin-collectors}
+
+組み込みのプラグインによって使われている、定義済みのコレクタークラスがいくつかあります。
+これらは当然ですが、自作プラグインからも利用することができます。
+
+### `Droonga::Collectors::And`
+
+`and` 論理演算子によって2つの値を比較した結果を返します。
+両方の値が論理的に真である場合、どちらかの値が返されます(どちらが返されるかは不定です)。
+
+`null` (`nil`) および `false` は論理的に偽として扱われ、それ以外の場合はすべて真として扱われます。
+
+### `Droonga::Collectors::Or`
+
+`or` 論理演算子によって2つの値を比較した結果を返します。
+片方の値だけが論理的に真である場合、その値が返り値となります。
+そうでなく2つの値が論理的に等しい場合は、どちらかの値が返されます(どちらが返されるかは不定です)。
+
+`null` (`nil`) および `false` は論理的に偽として扱われ、それ以外の場合はすべて真として扱われます。
+
+### `Droonga::Collectors::Sum`
+
+2つの値をまとめた結果を返します。
+
+このコレクターは若干複雑な動作をします。
+
+ * 片方の値が `null` (`nil`) である場合、もう片方の値を返します。
+ * 両方の値がハッシュである場合、ハッシュの結合結果を値として返します。
+   * 結果のハッシュは、2つのハッシュが持つキーのすべてを持ちます。
+     両方のハッシュでキーが重複した場合、重複したキーの値はどちらかのハッシュの値となります。
+   * 重複するキーの値についてどちらのハッシュの値が使われるかは不定です。
+ * それ以外の場合は、 `a + b` の結果を値として返します。
+   * 両方ともの値が配列または文字列であった場合、それらを連結した結果を値として返します。
+     どちらの値が左辺になるかは不定です。
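+
+参考までに、`Sum` コレクターによる値の結合規則をプレーンなRubyの擬似コードで表現すると、概ね以下のようなイメージになります(実際の実装そのものではなく、動作を説明するためのスケッチです):
+
+~~~ruby
+# Droonga::Collectors::Sum の結合規則のイメージ(擬似コード)
+def sum_collect(a, b)
+  return b if a.nil?
+  return a if b.nil?
+  if a.is_a?(Hash) and b.is_a?(Hash)
+    # ハッシュ同士の場合は結合する(重複するキーの値はどちらになるかは不定)
+    a.merge(b)
+  else
+    # それ以外は a + b(配列や文字列の場合は連結。どちらが左辺になるかは不定)
+    a + b
+  end
+end
+~~~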
+

  Added: ja/reference/1.1.0/plugin/error/index.md (+70 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/plugin/error/index.md    2014-11-30 23:20:40 +0900 (df19b7e)
@@ -0,0 +1,70 @@
+---
+title: プラグインでのエラーの扱い
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/plugin/error/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+
+## 概要 {#abstract}
+
+プラグイン内部で発生した例外のうち、そのプラグイン自身によって捕捉されなかった物は、すべて、入力メッセージに対する[エラーレスポンス][error response]として返されます。この時のエラーレスポンスのステータスコードは`500`(Internal Errorを意味する)です。
+
+整形されたエラー情報を返したい場合は、低レベルのエラーを捕捉した上で、`Droonga::ErrorMessage::BadRequest`または`Droonga::ErrorMessage::InternalServerError`を継承したカスタムエラークラスでラップして再度`raise`して下さい。
+(ちなみに、これらの基底クラスはプラグインの名前空間に初期状態で`include`されているため、エラークラスの定義時には単に`class CustomError < BadRequest`などと書くだけで参照できます。)
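+
+例えば、以下はリクエスト起因のエラーを表す独自のエラークラスを定義して利用する例です(クラス名とメッセージは説明のための仮の物です):
+
+    class CustomError < BadRequest
+    end
+
+    raise CustomError.new("Something is wrong in the request!")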
+
+
+## 組み込みのエラークラス {#builtin-errors}
+
+組み込みのプラグインやDroonga Engine自身によってあらかじめ定義されているエラークラスとしては、以下の物があります。
+
+### `Droonga::ErrorMessage::NotFound`
+
+データセットまたは指定された情報ソースの中に、探している情報が見つからなかったことを示す。例:
+
+    # 第2引数はエラーの詳細な情報。(省略可能)
+    raise Droonga::NotFound.new("#{name} is not found!", :elapsed_time => elapsed_time)
+
+### `Droonga::ErrorMessage::BadRequest`
+
+文法エラーやバリデーションエラーなど、入力メッセージ自体にエラーが含まれていたことを示す。例:
+
+    # 第2引数はエラーの詳細な情報。(省略可能)
+    raise Droonga::NotFound.new("Syntax error in #{query}!", :detail => detail)
+
+### `Droonga::ErrorMessage::InternalServerError`
+
+タイムアウト、ファイル入出力のエラーなど、その他の未知のエラーであることを示す。例:
+
+    # 第2引数はエラーの詳細な情報。(省略可能)
+    raise Droonga::ErrorMessage::InternalServerError.new("busy!", :elapsed_time => elapsed_time)
+
+
+## 組み込みのステータスコード {#builtin-status-codes}
+
+エラーのステータスコードとしては、以下のステータスコードか、もしくは[慣習に従ったステータスコード](../../message/#error-status)を使用します。
+
+`Droonga::StatusCode::OK`
+: `200`と等価。
+
+`Droonga::StatusCode::NOT_FOUND`
+: `404`と等価。
+
+`Droonga::StatusCode::BAD_REQUEST`
+: `400`と等価。
+
+`Droonga::StatusCode::INTERNAL_ERROR`
+: `500`と等価。
+
+
+  [error response]: ../../message/#error

  Added: ja/reference/1.1.0/plugin/handler/index.md (+230 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/plugin/handler/index.md    2014-11-30 23:20:40 +0900 (a773852)
@@ -0,0 +1,230 @@
+---
+title: ハンドリング・フェーズでのプラグインAPI
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/plugin/handler/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+
+## 概要 {#abstract}
+
+各々のDroonga Engineプラグインは、それ自身のための*ハンドラー*を持つことができます。ハンドリング・フェーズでは、ハンドラーはリクエストを処理して結果を返すことができます。
+
+
+### ハンドラーの定義の仕方 {#howto-define}
+
+例えば、「foo」という名前のプラグインにハンドラーを定義する場合は以下のようにします:
+
+~~~ruby
+require "droonga/plugin"
+
+module Droonga::Plugins::FooPlugin
+  extend Plugin
+  register("foo")
+
+  define_single_step do |step|
+    step.name = "foo"
+    step.handler = :Handler
+    step.collector = Collectors::And
+  end
+
+  class Handler < Droonga::Handler
+    def handle(message)
+      # リクエストを処理するための操作
+    end
+  end
+end
+~~~
+
+ハンドラーを定義するための手順は以下の通りです:
+
+ 1. プラグイン用のモジュール(例:`Droonga::Plugins::FooPlugin`)を定義し、プラグインとして登録する。(必須)
+ 2. [`Droonga::SingleStepDefinition`](#class-Droonga-SingleStepDefinition)を使い、実装しようとしているハンドラーに対応する「single step」を定義する。(必須)
+ 3. [`Droonga::Handler`](#classes-Droonga-Handler)を継承したハンドラークラス(例:`Droonga::Plugins::FooPlugin::Handler`)を定義する。(必須)
+ 4. リクエストを処理する操作を[`#handle`](#classes-Droonga-Handler-handle)として定義する。(任意)
+
+
+[プラグイン開発チュートリアル](../../../tutorial/plugin-development/handler/)も併せて参照して下さい。
+
+
+### ハンドラーはどのように操作するか {#how-works}
+
+ハンドラーは以下のように動作します:
+
+ 1. Droonga Engineが起動する。
+    * stepとハンドラークラスが登録される。
+    * Droonga Engineが起動し、入力メッセージを待ち受ける。
+ 2. 適合フェーズからメッセージが転送されてくる。
+    この時点で処理フェーズが開始される。
+    * Droonga Engineが、メッセージタイプからstepの定義を見つける。
+    * Droonga Engineが、登録済みの定義に従ってsingle stepを作成する。
+    * single stepが、登録済みのハンドラークラスのインスタンスを作成する。
+      この時点でハンドリング・フェーズが開始される。
+      * ハンドラーの[`#handle`](#classes-Droonga-Handler-handle)メソッドが、リクエストの情報を含むタスクメッセージを伴って呼ばれる。
+        * このメソッドにより、入力メッセージを任意に処理することができる。
+        * このメソッドは、処理結果の出力を戻り値として返す。
+      * ハンドラーの処理が完了した時点で、そのタスクメッセージ(およびリクエスト)のハンドリング・フェーズが終了する。
+    * メッセージタイプからstepが見つからなかった場合は、何も処理されない。
+    * すべてのstepが処理を終えた時点で、そのリクエストに対する処理フェーズが終了する。
+
+上記の通り、Droonga Engineは各リクエストに対してその都度ハンドラークラスのインスタンスを生成します。
+
+ハンドラー内で発生したすべてのエラーは、Droonga Engine自身によって処理されます。[エラー処理][error handling]も併せて参照して下さい。
+
+
+## 設定 {#config}
+
+`action.synchronous` (真偽値, 省略可能, 初期値=`false`)
+: リクエストを同期的に処理する必要があるかどうかを示す。
+  例えば、テーブル内に新しいカラムを追加するリクエストは、テーブルが存在しない場合には必ず、テーブル作成用のリクエストの後で処理する必要がある。このような場合のハンドラーは、 `action.synchronous = true` の指定を伴うことになる。
+
+
+## クラスとメソッド {#classes}
+
+### `Droonga::SingleStepDefinition` {#classes-Droonga-SingleStepDefinition}
+
+このクラスは、ハンドラーに対応するstepの詳細を記述する機能を提供します。
+
+#### `#name`, `#name=(name)` {#classes-Droonga-SingleStepDefinition-name}
+
+step自身の名前を記述します。値は文字列です。
+
+Droonga Engineは、メッセージの`type`に一致する`name`を持つstepが存在する場合に、入力メッセージをコマンドのリクエストとして扱います。
+言い換えると、このメソッドはstepに対応するコマンドの名前を定義します。
+
+
+#### `#handler`, `#handler=(handler)` {#classes-Droonga-SingleStepDefinition-handler}
+
+特定のハンドラークラスをstepに紐付けます。
+ハンドラークラスは以下のいずれかの方法で指定します:
+
+ * `Handler` や `Droonga::Plugins::FooPlugin::Handler` のような、ハンドラークラス自体への参照。
+   当然ながら、参照先のクラスはその時点で定義済みでなければなりません。
+ * `:Handler`のような、その名前空間で定義されているハンドラークラスのクラス名のシンボル。
+   この記法は、stepを先に記述して後からハンドラークラスを定義する場合に有用です。
+ * `"Droonga::Plugins::FooPlugin::Handler"` のような、ハンドラークラスのクラスパス文字列。
+   この記法もまた、stepの後でハンドラークラスを定義する場合に有用です。
+
+ハンドラークラスをシンボルまたは文字列で指定した場合、参照先のクラスは、Droonga Engineが実際にそのstepを処理する時点までの間に定義しておく必要があります。
+Droonga Engineがハンドラークラスの実体を見つけられなかった場合、またはハンドラークラスが未指定の場合には、Droonga Engineはそのリクエストに対して何も処理を行いません。
+
+#### `#collector`, `#collector=(collector)` {#classes-Droonga-SingleStepDefinition-collector}
+
+特定のコレクタークラスをstepに紐付けます。
+コレクタークラスは以下のいずれかの方法で指定します:
+
+ * `Collectors::Something` や `Droonga::Plugins::FooPlugin::MyCollector` のような、コレクタークラス自体への参照。
+   当然ながら、参照先のクラスはその時点で定義済みでなければなりません。
+ * `:MyCollector`のような、その名前空間で定義されているコレクタークラスのクラス名のシンボル。
+   この記法は、stepを先に記述して後からコレクタークラスを定義する場合に有用です。
+ * `"Droonga::Plugins::FooPlugin::MyCollector"` のような、コレクタークラスのクラスパス文字列。
+   この記法もまた、stepの後でコレクタークラスを定義する場合に有用です。
+
+コレクタークラスをシンボルまたは文字列で指定した場合、参照先のクラスは、Droonga Engineが実際にそのstepの結果を集約する時点までの間に定義しておく必要があります。
+Droonga Engineがコレクタークラスの実体を見つけられなかった場合、またはコレクタークラスが未指定の場合には、Droonga Engineは処理結果を集約せず、複数のメッセージとして返します。
+
+[コレクターの説明][collector]も併せて参照して下さい。
+
+#### `#write`, `#write=(write)` {#classes-Droonga-SingleStepDefinition-write}
+
+stepがストレージ内の情報を変更し得るかどうかを記述します。
+リクエストがストレージ内のデータを変更することを意図する物である場合、そのリクエストはすべてのreplicaで処理される必要があります。
+それ以外の場合、Droonga Engineは結果をキャッシュしたり、CPUやメモリの使用量を削減するなどして、処理を最適化することができます。
+
+取り得る値:
+
+ * `true`: そのstepではストレージの内容が変更される可能性がある事を示す。
+ * `false`: そのstepではストレージの内容が変更される可能性はない事を示す。(初期値)
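+
+例えば、ストレージの内容を変更するコマンドのstepを定義する場合は以下のようになります(`step.name` などの値は説明のための仮の物です):
+
+~~~ruby
+define_single_step do |step|
+  step.name      = "my-add"
+  step.handler   = :Handler
+  step.collector = Collectors::And
+  # このstepはストレージの内容を変更するため、すべてのreplicaで処理される
+  step.write     = true
+end
+~~~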
+
+#### `#inputs`, `#inputs=(inputs)` {#classes-Droonga-SingleStepDefinition-inputs}
+
+(未稿)
+
+#### `#output`, `#output=(output)` {#classes-Droonga-SingleStepDefinition-output}
+
+(未稿)
+
+### `Droonga::Handler` {#classes-Droonga-Handler}
+
+これはすべてのハンドラーに共通の基底クラスです。独自プラグインのハンドラークラスは、このクラスを継承する必要があります。
+
+#### `#handle(message)` {#classes-Droonga-Handler-handle}
+
+このメソッドは、[`Droonga::HandlerMessage`](#classes-Droonga-HandlerMessage)でラップされたタスクメッセージを受け取ります。
+プラグインは、このタスクメッセージのメソッドからリクエストの情報を読み取る事ができます。
+
+この基底クラスにおいて、このメソッドは何もしない単なるプレースホルダとして定義されています。
+メッセージを処理するには、以下のようにメソッドを再定義して下さい:
+
+~~~ruby
+module Droonga::Plugins::MySearch
+  class Handler < Droonga::Handler
+    def handle(message)
+      search_query = message.request["body"]["query"]
+      ...
+      { ... } # the result
+    end
+  end
+end
+~~~
+
+Droonga Engineは、このメソッドの戻り値を処理の結果として扱います。
+結果の値は、レスポンスのbodyの組み立てに使われ、Protocol Adapterに送られます。
+
+
+### `Droonga::HandlerMessage` {#classes-Droonga-HandlerMessage}
+
+このクラスはタスクメッセージに対するラッパーとして働きます。
+
+Droonga Engineは送られてきたリクエストのメッセージを解析し、そのリクエストを処理するための複数のタスクメッセージを作成します。
+1つのタスクメッセージは、リクエストの実体、step、後続するタスクの一覧などの情報を持ちます。
+
+#### `#request` {#classes-Droonga-HandlerMessage-request}
+
+このメソッドはリクエストメッセージを返します。例:
+
+~~~ruby
+module Droonga::Plugins::MySearch
+  class Handler < Droonga::Handler
+    def handle(message)
+      request = message.request
+      search_query = request["body"]["query"]
+      ...
+    end
+  end
+end
+~~~
+
+#### `@context` {#classes-Droonga-HandlerMessage-context}
+
+対応するボリュームのストレージを示す、`Groonga::Context`のインスタンスへの参照。
+[Rroongaのクラスリファレンス][Groonga::Context]も併せて参照して下さい。
+
+`@context`を経由して、Rroongaのすべての機能を利用できます。
+例えば、以下は指定されたテーブルのすべてのレコードの数を返す例です:
+
+~~~ruby
+module Droonga::Plugins::CountRecords
+  class Handler < Droonga::Handler
+    def handle(message)
+      request = message.request
+      table_name = request["body"]["table"]
+      count = @context[table_name].size
+    end
+  end
+end
+~~~
+
+  [error handling]: ../error/
+  [collector]: ../collector/
+  [Groonga::Context]: http://ranguba.org/rroonga/en/Groonga/Context.html

  Added: ja/reference/1.1.0/plugin/index.md (+21 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/plugin/index.md    2014-11-30 23:20:40 +0900 (7d12caf)
@@ -0,0 +1,21 @@
+---
+title: プラグイン開発
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/plugin/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+Droonga Engineはプラグインに対して、処理の各段階ごとに異なるAPIセットを提供します。[プラグイン開発のチュートリアル](../../tutorial/plugin-development/)も参照してください。
+
+ * [適合フェーズでのAPI](adapter/)
+ * [ハンドリング・フェーズでのAPI](handler/)
+ * [メッセージのためのマッチングパターン](matching-pattern/)
+ * [コレクター](collector/)
+ * [エラー処理](error/)

  Added: ja/reference/1.1.0/plugin/matching-pattern/index.md (+242 -0) 100644
===================================================================
--- /dev/null
+++ ja/reference/1.1.0/plugin/matching-pattern/index.md    2014-11-30 23:20:40 +0900 (06a403f)
@@ -0,0 +1,242 @@
+---
+title: メッセージのマッチングパターン
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/reference/1.1.0/plugin/matching-pattern/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+
+## 概要 {#abstract}
+
+Droonga Engineはメッセージのパターンを指定するための小規模な言語を実装しています。これを*マッチングパターン*と呼びます。
+マッチングパターンは、プラグインなどの様々な場所で処理対象のメッセージを指定するために使われます。
+
+
+## 例 {#examples}
+
+### 単純なマッチング
+
+    pattern = ["type", :equal, "search"]
+
+これは以下のようなメッセージにマッチします:
+
+    {
+      "type": "search",
+      ...
+    }
+
+### 深い位置にある対象へのマッチング
+
+    pattern = ["body.success", :equal, true]
+
+これは以下のようなメッセージにマッチします:
+
+    {
+      "type": "add.result",
+      "body": {
+        "success": true
+      }
+    }
+
+以下にはマッチしません:
+
+    {
+      "type": "add.result",
+      "body": {
+        "success": false
+      }
+    }
+
+### パターン自体のネスト
+
+    pattern = [
+                 ["type", :equal, "table_create"],
+                 :or,
+                 ["body.success", :equal, true]
+              ]
+
+これは以下の両方にマッチします:
+
+    {
+      "type": "table_create",
+      ...
+    }
+
+および:
+
+    {
+      "type": "column_create",
+      ...
+      "body": {
+        "success": true
+      }
+    }
+
+
+## 書式 {#syntax}
+
+マッチングパターンには「基本パターン」と「ネストしたパターン」の2種類があります。
+
+### 基本パターン {#syntax-basic}
+
+#### 構造 {#syntax-basic-structure}
+
+基本パターンは以下のように、2つ以上の要素を含む配列として表現されます:
+
+    ["type", :equal, "search"]
+
+ * 最初の要素は *ターゲットパス* です。これは、[メッセージ][message]の中でチェックされる情報の位置を示します。
+ * 2番目の要素は *演算子* です。これは、ターゲットパスで示された情報をどのようにチェックするかを示します。
+ * 3番目の要素は *演算子のための引数* です。これは、プリミティブ値(文字列、数値、または真偽値)、もしくはそれらの値の配列です。ただし、いくつかの演算子は引数を取りません。
+
+#### ターゲットパス {#syntax-basic-target-path}
+
+ターゲットパスは以下の文字列のような形で示します:
+
+    "body.success"
+
+Droonga Engineのマッチング機構は、これをドットで区切られた *パスコンポーネント* のリストとして解釈します。
+1つのパスコンポーネントはメッセージ中の同名のプロパティを表します。
+よって、上記の例は以下の位置を示します:
+
+    {
+      "body": {
+        "success": <target>
+      }
+    }
+
+
+
+
+#### 利用可能な演算子 {#syntax-basic-operators}
+
+演算子はシンボルとして指定します。
+
+`:equal`
+: ターゲットの値が与えられた値と等しい場合に `true` を返します。それ以外の場合は `false` を返します。
+  例えば、
+  
+      ["type", :equal, "search"]
+  
+  上記のパターンは以下のようなメッセージにマッチします:
+  
+      {
+        "type": "search",
+        ...
+      }
+
+`:in`
+: ターゲットの値が与えられた配列の中に含まれている場合に `true` を返します。それ以外の場合は `false` を返します。
+  例えば、
+  
+      ["type", :in, ["search", "select"]]
+  
+  上記のパターンは以下のようなメッセージにマッチします:
+  
+      {
+        "type": "select",
+        ...
+      }
+  
+  以下にはマッチしません:
+  
+      {
+        "type": "find",
+        ...
+      }
+
+`:include`
+: ターゲットの値の配列の中に指定された値が含まれている場合に `true` を返します。それ以外の場合は `false` を返します。
+  言い換えると、これは `:in` 演算子の反対の働きをします。
+  例えば、
+  
+      ["body.tags", :include, "News"]
+  
+  上記のパターンは以下のようなメッセージにマッチします:
+  
+      {
+        "type": "my.notification",
+        "body": {
+          "tags": ["News", "Groonga", "Droonga", "Fluentd"]
+        }
+      }
+
+`:exist`
+: ターゲットに指定された情報が存在する場合は `true` を返します。それ以外の場合は `false` を返します。
+  例えば、
+  
+      ["body.comments", :exist, "News"]
+  
+  上記のパターンは以下のようなメッセージにマッチします:
+  
+      {
+        "type": "my.notification",
+        "body": {
+          "title": "Hello!",
+          "comments": []
+        }
+      }
+  
+  以下にはマッチしません:
+  
+      {
+        "type": "my.notification",
+        "body": {
+          "title": "Hello!"
+        }
+      }
+
+`:start_with`
+: ターゲットの文字列が指定された文字列で始まる場合に `true` を返します。それ以外の場合は `false` を返します。
+  例えば、
+  
+      ["body.path", :start_with, "/archive/"]
+  
+  上記のパターンは以下のようなメッセージにマッチします:
+  
+      {
+        "type": "my.notification",
+        "body": {
+          "path": "/archive/2014/02/28.html"
+        }
+      }
+
+
+### ネストしたパターン {#syntax-nested}
+
+#### 構造 {#syntax-nested-structure}
+
+ネストしたパターンは、以下のような3つの要素を持つ配列として表現されます:
+
+    [
+      ["type", :equal, "table_create"],
+      :or,
+      ["type", :equal, "column_create"]
+    ]
+
+ * 最初の要素と最後の要素は基本パターンまたはネストしたパターンです。(言い換えると、ネストしたパターンは再帰的に書くことができます。)
+ * 2番目の要素は *論理演算子* です。
+
+#### 利用可能な演算子 {#syntax-nested-operators}
+
+`:and`
+: 与えられた両方のパターンが `true` を返す場合に、`true` を返します。それ以外の場合は `false` を返します。
+
+`:or`
+: 与えられたパターン(1番目または3番目の要素)のいずれかまたは両方が `true` を返す場合に `true` を返します。それ以外の場合は `false` を返します。
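+
+例えば、以下は2つの基本パターンを `:and` で組み合わせたネストしたパターンの一例です:
+
+    pattern = [
+                 ["type", :equal, "add.result"],
+                 :and,
+                 ["body.success", :equal, true]
+              ]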
+
+
+
+
+  [message]:../../message/
+

  Added: ja/tutorial/1.1.0/add-replica/index.md (+389 -0) 100644
===================================================================
--- /dev/null
+++ ja/tutorial/1.1.0/add-replica/index.md    2014-11-30 23:20:40 +0900 (93cc17f)
@@ -0,0 +1,389 @@
+---
+title: "Droongaチュートリアル: 既存クラスタへのreplicaの追加"
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/tutorial/1.1.0/add-replica/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## チュートリアルのゴール
+
+既存の[Droonga][]クラスタについて、新しいreplicaを追加し、既存のreplicaを削除し、および、既存のreplicaを新しいreplicaで置き換えるための手順を学ぶこと。
+
+## 前提条件
+
+* 何らかのデータが格納されている状態の[Droonga][]クラスタがあること。
+  このチュートリアルを始める前に、[「使ってみる」のチュートリアル](../groonga/)を完了しておいて下さい。
+* 複数のクラスタの間でのデータの複製方法を把握していること。
+  このチュートリアルを始める前に、[バックアップと復元のチュートリアル](../dump-restore/)を完了しておいて下さい。
+
+このチュートリアルでは、[最初のチュートリアル](../groonga/)で準備した2つの既存のDroongaノード:`node0` (`192.168.100.50`) 、 `node1` (`192.168.100.51`) と、新しいノードとして使うもう1台のコンピュータ `node2` (`192.168.100.52`) があると仮定します。
+あなたの手元にあるDroongaノードがこれとは異なる名前である場合には、以下の説明の中の`node0`、`node1`、`node2`は実際の物に読み替えて下さい。
+
+## 「replica」とは?
+
+Droongaのノードの集合には、「replica」と「slice」という2つの軸があります。
+
+「replica」のノード群は、完全に同一のデータを持っており、検索などのリクエストを各ノードが並行して処理する事ができます。
+新しいreplicaを追加する事によって、増加するリクエストに対して処理能力を増強することができます。
+
+他方、「slice」のノード群はそれぞれ異なるデータを持ちます(例えば、あるノードは2013年のデータ、別のノードは2014年のデータ、という具合です)。
+新しいsliceを追加する事によって、増大するデータ量に対してクラスタとしての容量を拡大することができます。
+
+現在の所、Groonga互換のシステムとして設定されたDroongaクラスタについては、replicaを追加することはできますが、sliceを追加することはできません。
+この点については将来のバージョンで改善する予定です。
+
+ともかく、このチュートリアルでは既存のDroongaクラスタに新しいreplicaを追加する手順を解説します。
+早速始めましょう。
+
+## 既存のクラスタに新しいreplicaノードを追加する
+
+このケースでは、検索のように読み取りのみを行うリクエストに対しては、クラスタの動作を止める必要はありません。
+サービスを停止することなく、その裏側でシームレスに新しいreplicaを追加することができます。
+
+その一方で、クラスタへの新しいデータの流入は、新しいノードが動作を始めるまでの間停止しておく必要があります。
+(将来的には、新しいノードを完全に無停止で追加できるようにする予定ですが、今のところはそれはできません。)
+
+ここでは、`node0` と `node1` の2つのreplicaノードからなるDroongaクラスタがあり、新しいreplicaノードとして `node2` を追加すると仮定します。
+
+### 新しいノードをセットアップする
+
+まず、新しいコンピュータをセットアップし、必要なソフトウェアのインストールと設定を済ませます。
+
+~~~
+(on node2)
+# curl https://raw.githubusercontent.com/droonga/droonga-engine/master/install.sh | \
+    HOST=node2 bash
+# curl https://raw.githubusercontent.com/droonga/droonga-http-server/master/install.sh | \
+    ENGINE_HOST=node2 HOST=node2 bash
+~~~
+
+注意点として、空でないノードを既存のクラスタに追加することはできません。
+もしそのコンピュータがかつてDroongaノードとして使われていた事があった場合には、最初に古いデータを消去する必要があります。
+
+~~~
+(on node2)
+# droonga-engine-configure --quiet \
+                           --clear --reset-config --reset-catalog \
+                           --host=node2
+# droonga-http-server-configure --quiet --reset-config \
+                                --droonga-engine-host-name=node2 \
+                                --receive-host-name=node2
+~~~
+
+では、サービスを起動しましょう。
+
+~~~
+(on node2)
+# service droonga-engine start
+# service droonga-http-server start
+~~~
+
+この時点で、この新しいノードは既存のクラスタのノードとしては動作していません。
+この事は、`system.status`コマンドを通じて確認できます:
+
+~~~
+$ curl "http://node0:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node1:10031/droonga": {
+      "live": true
+    }
+  }
+}
+$ curl "http://node1:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node1:10031/droonga": {
+      "live": true
+    }
+  }
+}
+$ curl "http://node2:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node2:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+### 書き込みを伴うリクエストの流入を一時的に停止する
+
+新しいreplicaとの間でデータを完全に同期する必要があるので、クラスタの構成を変更する前に、クラスタへのデータの書き込みを行うリクエストの流入を一時停止する必要があります。
+そうしないと、新しく追加したreplicaが中途半端なデータしか持たない矛盾した状態となってしまい、リクエストに対してクラスタが返す処理結果が不安定になります。
+
+データの書き込みを伴うリクエストとは、具体的には、クラスタ内のデータを変更する以下のコマンドです:
+
+ * `add`
+ * `column_create`
+ * `column_remove`
+ * `delete`
+ * `load`
+ * `table_create`
+ * `table_remove`
+
+cronjobとして実行されるバッチスクリプトによって `load` コマンド経由で新しいデータを投入している場合は、cronjobを停止して下さい。
+クローラが `add` コマンド経由で新しいデータを投入している場合は、クローラを停止して下さい。
+あるいは、クローラやローダーとクラスタの間にFluentdを置いてバッファとして利用しているのであれば、バッファからのメッセージ出力を停止して下さい。 
+
+[前項](../dump-restore/)から順番にチュートリアルを読んでいる場合、クラスタに流入しているリクエストはありませんので、ここでは特に何もする必要はありません。
+
+### 新しいreplicaをクラスタに参加させる
+
+新しいreplicaノードを既存のクラスタに追加するには、いずれかの既存のノードもしくは新しいreplicaノードのいずれかにおいて、`catalog.json` が置かれているディレクトリで、`droonga-engine-join` コマンドを実行します:
+
+~~~
+(on node2)
+$ droonga-engine-join --host=node2 \
+                      --replica-source-host=node0 \
+                      --receiver-host=node2
+Start to join a new node node2
+       to the cluster of node0
+                     via node2 (this host)"
+
+Joining new replica to the cluster...
+...
+Update existing hosts in the cluster...
+...
+Done.
+~~~
+
+このコマンドは、以下のようにして別のノード上で実行することもできます:
+
+~~~
+(on node1)
+$ droonga-engine-join --host=node2 \
+                      --replica-source-host=node0 \
+                      --receiver-host=node1
+Start to join a new node node2
+       to the cluster of node0
+                     via node1 (this host)"
+~~~
+
+ * `--host` オプションで、その新しいreplicaノードのホスト名(またはIPアドレス)を指定して下さい。
+ * `--replica-source-host` オプションで、クラスタ中の既存のノードの1つのホスト名(またはIPアドレス)を指定して下さい。
+ * `--receiver-host` オプションで、コマンドを実行しているマシン自身のホスト名(またはIPアドレス)を必ず指定して下さい。
+
+コマンドを実行すると、自動的に、クラスタのデータが新しいreplicaノードへと同期され始めます。
+データの同期が完了すると、ノードが自動的に再起動してクラスタに参加します。
+すべてのノードの`catalog.json`も同時に更新され、この時点をもって、新しいノードは晴れてそのクラスタのreplicaノードとして動作し始めます。
+
+これで、ノードがクラスタに参加しました。この事は `system.status` コマンドで確かめられます:
+
+~~~
+$ curl "http://node0:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node1:10031/droonga": {
+      "live": true
+    },
+    "node2:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+新しいノード`node2`がクラスタに参加したため、各ノードの`droonga-http-server`は自動的に、メッセージを`node2`にも分配するようになります。
+
+
+### 書き込みを伴うリクエストの流入を再開する
+
+さて、準備ができました。
+すべてのreplicaは完全に同期した状態となっているので、このクラスタはリクエストを安定して処理できます。
+cronjobを有効化する、クローラの動作を再開する、バッファからのメッセージ送出を再開する、などの操作を行って、クラスタ内のデータを変更するリクエストの流入を再開して下さい。
+
+以上で、Droongaクラスタに新しいreplicaノードを無事参加させる事ができました。
+
+
+## 既存のクラスタからreplicaノードを削除する
+
+Droongaノードは、メモリ不足、ディスク容量不足、ハードウェア障害など、様々な致命的な理由によって動作しなくなり得ます。
+Droongaクラスタ内のノードは互いに監視しあっており、動作しなくなったノードに対してはメッセージの配送を自動的に停止して、動作しないノードがあってもクラスタ全体としては動作し続けるようになっています。
+このような時には、動作していないノードを取り除く必要があります。
+
+もちろん、他の目的に転用したいといった理由から、正常動作中のノードを取り除きたいと考える場合もあるでしょう。
+
+ここでは、`node0` 、 `node1` 、`node2` の3つのreplicaノードからなるDroongaクラスタがあり、最後のノード `node2` をクラスタから離脱させようとしていると仮定します。
+
+### 既存のreplicaをクラスタから分離する
+
+replicaノードを既存のクラスタから削除するには、クラスタ内のいずれかのノードの上で、以下のようにして `droonga-engine-unjoin` コマンドを実行します:
+
+~~~
+(on node0)
+$ droonga-engine-unjoin --host=node2 \
+                        --receiver-host=node0
+Start to unjoin a node node2
+                    by node0 (this host)
+
+Unjoining replica from the cluster...
+...
+Done.
+~~~
+
+ * `--host` オプションで、クラスタから削除するノードのホスト名(またはIPアドレス)を指定して下さい。
+ * `--receiver-host` オプションで、コマンドを実行しているマシン自身のホスト名(またはIPアドレス)を必ず指定して下さい。
+
+すると、ノードがクラスタから自動的に離脱し、すべてのノードの `catalog.json` も同時に更新されます。
+これで、ノードはクラスタから無事離脱しました。
+
+`node2` が本当にクラスタから離脱しているかどうかは `system.status` コマンドで確かめられます:
+
+~~~
+$ curl "http://node0:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node1:10031/droonga": {
+      "live": true
+    }
+  }
+}
+$ curl "http://node1:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node1:10031/droonga": {
+      "live": true
+    }
+  }
+}
+$ curl "http://node2:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node2:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+ノード`node2`はもはやクラスタの一員ではないため、`node0`と`node1`の`droonga-http-server`は`node2`の`droonga-engine`へはもうメッセージを送りません。
+またその一方で、`node2`の`droonga-http-server`はそのノード上の`droonga-engine`にのみ関連付けられており、他のノードへはメッセージを送りません。
+
+
+
+## クラスタ内の既存のreplicaノードを新しいreplicaノードで置き換える
+
+ノードの置き換えは、上記の手順の組み合わせで行います。
+
+ここでは、`node0` と `node1` の2つのreplicaノードからなるDroongaクラスタがあり、`node1` が不安定で、それを新しいreplicaノード `node2` で置き換えようとしていると仮定します。
+
+### 既存のreplicaをクラスタから分離する
+
+まず、不安定になっているノードを取り除きます。以下のようにしてクラスタからノードを離脱させて下さい:
+
+~~~
+(on node0)
+$ droonga-engine-unjoin --host=node1
+~~~
+
+これで、ノードがクラスタから離脱しました。この事は `system.status` コマンドで確かめられます:
+
+~~~
+$ curl "http://node0:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+### 新しいreplicaを追加する
+
+次に、新しいreplica `node2`を用意します。
+必要なパッケージをインストールし、`catalog.json`を生成して、サービスを起動します。
+
+~~~
+(on node2)
+# curl https://raw.githubusercontent.com/droonga/droonga-engine/master/install.sh | \
+    HOST=node2 bash
+# curl https://raw.githubusercontent.com/droonga/droonga-http-server/master/install.sh | \
+    ENGINE_HOST=node2 HOST=node2 bash
+~~~
+
+そのコンピュータがかつてDroongaノードの一員だったことがある場合は、インストール作業の代わりに、古いデータを消去する必要があります:
+
+~~~
+(on node2)
+# droonga-engine-configure --quiet \
+                           --clear --reset-config --reset-catalog \
+                           --host=node2
+# droonga-http-server-configure --quiet --reset-config \
+                                --droonga-engine-host-name=node2 \
+                                --receive-host-name=node2
+~~~
+
+そうしたら、そのノードをクラスタに参加させましょう。
+
+~~~
+(on node2)
+$ droonga-engine-join --host=node2 \
+                      --replica-source-host=node0
+~~~
+
+最終的に、`node0` と `node2` の2つのノードからなるDroongaクラスタができあがりました。
+
+この事は、`system.status` コマンドの結果を見ると確認できます:
+
+~~~
+$ curl "http://node0:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node2:10031/droonga": {
+      "live": true
+    }
+  }
+}
+$ curl "http://node2:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node2:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+## まとめ
+
+このチュートリアルでは、既存の[Droonga][]クラスタに新しいreplicaノードを追加する方法を学びました。
+また、既存のreplicaを取り除く方法と、既存のreplicaを新しいreplicaで置き換える方法も学びました。
+
+  [Ubuntu]: http://www.ubuntu.com/
+  [Droonga]: https://droonga.org/
+  [Groonga]: http://groonga.org/
+  [command reference]: ../../reference/commands/

  Added: ja/tutorial/1.1.0/basic/index.md (+1126 -0) 100644
===================================================================
--- /dev/null
+++ ja/tutorial/1.1.0/basic/index.md    2014-11-30 23:20:40 +0900 (1d69266)
@@ -0,0 +1,1126 @@
+---
+title: "Droonga チュートリアル: 低レイヤのコマンドの基本的な使い方"
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/tutorial/1.1.0/basic/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## チュートリアルのゴール
+
+Droonga の低レイヤのコマンドを用いて、Droonga を使った検索システムを自分で構築できるようになる。
+
+## 前提条件
+
+* [Ubuntu][] または [CentOS][] の Server を自分でセットアップしたり、基本的な操作ができること
+* [Ruby][] と [Node.js][] の基本的な知識があること
+
+## 概要
+
+### Droonga とは
+
+分散データ処理エンジンです。 "distributed-groonga" に由来します。
+
+Droonga は複数のコンポーネントから構成されています。ユーザは、これらのパッケージを組み合わせて利用することで、全文検索をはじめとするスケーラブルな分散データ処理システムを構築することができます。
+
+### Droonga を構成するコンポーネント
+
+#### Droonga Engine
+
+Droonga Engine は Droonga における分散データ処理の要となるコンポーネントです。リクエストに基いて実際のデータ処理を行います。
+
+このコンポーネントは[droonga-engine][]という名前で開発およびリリースされています。
+通信に使用するプロトコルは[Fluentd]と互換性があります。
+
+[droonga-engine][] は検索エンジンとして、オープンソースのカラムストア機能付き全文検索エンジン [Groonga][] を使用しています。
+
+#### Protocol Adapter
+
+Protocol Adapter は、Droonga を様々なプロトコルで利用できるようにするためのコンポーネントです。
+
+Droonga Engine自体は通信プロトコルとしてfluentdプロトコルにのみ対応しています。
+その代わりに、Protocol AdapterがDroonga Engineとクライアントの間に立って、fluentdプロトコルと他の一般的なプロトコル(HTTP、Socket.IOなど)とを翻訳することになります。
+
+現在の所、HTTP用の実装として、[Node.js][]用モジュールパッケージの[droonga-http-server][]が存在しています。
+言い直すと、droonga-http-serverはDroonga Protocol Adapterの一実装で、言わば「Droonga HTTP Protocol Adapter」であるという事です。
+
+## チュートリアルでつくるシステムの全体像
+
+チュートリアルでは、以下の様な構成のシステムを構築します。
+
+    +-------------+              +------------------+             +----------------+
+    | Web Browser |  <-------->  | Protocol Adapter |  <------->  | Droonga Engine |
+    +-------------+   HTTP       +------------------+   Fluent    +----------------+
+                                 w/droonga-http        protocol   w/droonga-engine
+                                           -server
+
+
+                                 \--------------------------------------------------/
+                                       この部分を構築します
+
+ユーザは Protocol Adapter に、Web ブラウザなどを用いて接続します。Protocol Adapter は Droonga Engine へリクエストを送信します。実際の検索処理は Droonga Engine が行います。検索結果は、Droonga Engine から Protocol Adapter に渡され、最終的にユーザに返ります。
+
+例として、[ニューヨークにあるスターバックスの店舗](http://geocommons.com/overlays/430038)を検索できるデータベースシステムを作成することにします。
+
+
+## 実験用のマシンを用意する
+
+まずコンピュータを調達しましょう。このチュートリアルでは、既存のコンピュータにDroongaによる検索システムを構築する手順を解説します。
+以降の説明は基本的に、[DigitalOcean](https://www.digitalocean.com/)で `Ubuntu 14.04 x64`、`CentOS 6.5 x64`、 または `CentOS 7 x64` の仮想マシンのセットアップを完了し、コンソールにアクセスできる状態になった後を前提として進めます。
+
+注意:Droongaが必要とするパッケージをインストールする前に、マシンが2GB以上のメモリを備えていることを確認して下さい。メモリが不足していると、パッケージのインストール中にネイティブ拡張のビルドに失敗する場合があります。
+
+ホストが `192.168.100.50` だと仮定します。
+
+### Droonga Engineをインストールする
+
+Droonga Engine は、データベースを保持し、実際の検索を担当する部分です。
+このセクションでは、 droonga-engine をインストールし、検索対象となるデータを準備します。
+
+### `droonga-engine`をインストールする
+
+インストールスクリプトをダウンロードし、root権限で`bash`で実行して下さい:
+
+~~~
+# curl https://raw.githubusercontent.com/droonga/droonga-engine/master/install.sh | \
+    bash
+...
+Installing droonga-engine from RubyGems...
+...
+Preparing the user...
+...
+Setting up the configuration directory...
+This node is configured with a hostname XXXXXXXX.
+
+Registering droonga-engine as a service...
+...
+Successfully installed droonga-engine.
+~~~
+
+### `droonga-engine`を起動するための設定ファイルを用意する
+
+すべての設定ファイルと物理的なデータベースは、`droonga-engine`サービス用のユーザのホームディレクトリ内にある`droonga`ディレクトリの下に置かれます:
+
+    $ cd ~droonga-engine/droonga
+
+では、以下の内容で設定ファイル `catalog.json` を上書きしましょう:
+
+catalog.json:
+
+    {
+      "version": 2,
+      "effectiveDate": "2013-09-01T00:00:00Z",
+      "datasets": {
+        "Default": {
+          "nWorkers": 4,
+          "plugins": ["groonga", "crud", "search", "dump", "status"],
+          "schema": {
+            "Store": {
+              "type": "Hash",
+              "keyType": "ShortText",
+              "columns": {
+                "location": {
+                  "type": "Scalar",
+                  "valueType": "WGS84GeoPoint"
+                }
+              }
+            },
+            "Location": {
+              "type": "PatriciaTrie",
+              "keyType": "WGS84GeoPoint",
+              "columns": {
+                "store": {
+                  "type": "Index",
+                  "valueType": "Store",
+                  "indexOptions": {
+                    "sources": ["location"]
+                  }
+                }
+              }
+            },
+            "Term": {
+              "type": "PatriciaTrie",
+              "keyType": "ShortText",
+              "normalizer": "NormalizerAuto",
+              "tokenizer": "TokenBigram",
+              "columns": {
+                "stores__key": {
+                  "type": "Index",
+                  "valueType": "Store",
+                  "indexOptions": {
+                    "position": true,
+                    "sources": ["_key"]
+                  }
+                }
+              }
+            }
+          },
+          "replicas": [
+            {
+              "dimension": "_key",
+              "slicer": "hash",
+              "slices": [
+                {
+                  "volume": {
+                    "address": "192.168.100.50:10031/droonga.000"
+                  }
+                },
+                {
+                  "volume": {
+                    "address": "192.168.100.50:10031/droonga.001"
+                  }
+                },
+                {
+                  "volume": {
+                    "address": "192.168.100.50:10031/droonga.002"
+                  }
+                }
+              ]
+            },
+            {
+              "dimension": "_key",
+              "slicer": "hash",
+              "slices": [
+                {
+                  "volume": {
+                    "address": "192.168.100.50:10031/droonga.010"
+                  }
+                },
+                {
+                  "volume": {
+                    "address": "192.168.100.50:10031/droonga.011"
+                  }
+                },
+                {
+                  "volume": {
+                    "address": "192.168.100.50:10031/droonga.012"
+                  }
+                }
+              ]
+            }
+          ]
+        }
+      }
+    }
+
+この`catalog.json`では、データセット`Default`を以下のように定義しています:
+
+ * 最上位には1つのボリュームがあり、このボリュームには「レプリカ」と名付けられた2つのサブボリュームが含まれる。
+ * 1段階下がった次のレベルには、1つのレプリカ・ボリュームごとに「スライス」と名付けられた3つのサブボリュームが含まれる。
+   これらはDroongaのデータセットの最小の構成要素である。
+
+これらの6つの、`"address"`の情報を持つ最小単位のボリュームは、内部的に*シングル・ボリューム*と呼ばれます。
+`"address"`の情報は、対応する物理的なストレージであるGroongaのデータベースの位置を示していて、それらのデータベースは`droonga-engine`によって自動的に作成されます。
+
+`catalog.json` の詳細については [catalog.json](/ja/reference/catalog) を参照してください。
+
+### `droonga-engine`サービスの起動と終了
+
+`droonga-engine`サービスは`service`コマンドを使って起動できます:
+
+~~~
+# service droonga-engine start
+~~~
+
+終了する場合も、`service`コマンドを使います:
+
+~~~
+# service droonga-engine stop
+~~~
+
+確認できたら、再び`droonga-engine`を起動します。
+
+~~~
+# service droonga-engine start
+~~~
+
+### データベースを作成する
+
+Droonga Engine が起動したので、データを投入しましょう。
+店舗のデータ `stores.jsons` を用意します。
+
+stores.jsons:
+
+~~~
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "1st Avenue & 75th St. - New York NY  (W)",
+    "values": {
+      "location": "40.770262,-73.954798"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "76th & Second - New York NY  (W)",
+    "values": {
+      "location": "40.771056,-73.956757"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "2nd Ave. & 9th Street - New York NY",
+    "values": {
+      "location": "40.729445,-73.987471"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "15th & Third - New York NY  (W)",
+    "values": {
+      "location": "40.733946,-73.9867"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "41st and Broadway - New York NY  (W)",
+    "values": {
+      "location": "40.755111,-73.986225"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "84th & Third Ave - New York NY  (W)",
+    "values": {
+      "location": "40.777485,-73.954979"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "150 E. 42nd Street - New York NY  (W)",
+    "values": {
+      "location": "40.750784,-73.975582"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "West 43rd and Broadway - New York NY  (W)",
+    "values": {
+      "location": "40.756197,-73.985624"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "Macy's 35th Street Balcony - New York NY",
+    "values": {
+      "location": "40.750703,-73.989787"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "Macy's 6th Floor - Herald Square - New York NY  (W)",
+    "values": {
+      "location": "40.750703,-73.989787"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "Herald Square- Macy's - New York NY",
+    "values": {
+      "location": "40.750703,-73.989787"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "Macy's 5th Floor - Herald Square - New York NY  (W)",
+    "values": {
+      "location": "40.750703,-73.989787"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "80th & York - New York NY  (W)",
+    "values": {
+      "location": "40.772204,-73.949862"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "Columbus @ 67th - New York NY  (W)",
+    "values": {
+      "location": "40.774009,-73.981472"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "45th & Broadway - New York NY  (W)",
+    "values": {
+      "location": "40.75766,-73.985719"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "Marriott Marquis - Lobby - New York NY",
+    "values": {
+      "location": "40.759123,-73.984927"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "Second @ 81st - New York NY  (W)",
+    "values": {
+      "location": "40.77466,-73.954447"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "52nd & Seventh - New York NY  (W)",
+    "values": {
+      "location": "40.761829,-73.981141"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "1585 Broadway (47th) - New York NY  (W)",
+    "values": {
+      "location": "40.759806,-73.985066"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "85th & First - New York NY  (W)",
+    "values": {
+      "location": "40.776101,-73.949971"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "92nd & 3rd - New York NY  (W)",
+    "values": {
+      "location": "40.782606,-73.951235"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "165 Broadway - 1 Liberty - New York NY  (W)",
+    "values": {
+      "location": "40.709727,-74.011395"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "1656 Broadway - New York NY  (W)",
+    "values": {
+      "location": "40.762434,-73.983364"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "54th & Broadway - New York NY  (W)",
+    "values": {
+      "location": "40.764275,-73.982361"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "Limited Brands-NYC - New York NY",
+    "values": {
+      "location": "40.765219,-73.982025"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "19th & 8th - New York NY  (W)",
+    "values": {
+      "location": "40.743218,-74.000605"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "60th & Broadway-II - New York NY  (W)",
+    "values": {
+      "location": "40.769196,-73.982576"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "63rd & Broadway - New York NY  (W)",
+    "values": {
+      "location": "40.771376,-73.982709"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "195 Broadway - New York NY  (W)",
+    "values": {
+      "location": "40.710703,-74.009485"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "2 Broadway - New York NY  (W)",
+    "values": {
+      "location": "40.704538,-74.01324"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "2 Columbus Ave. - New York NY  (W)",
+    "values": {
+      "location": "40.769262,-73.984764"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "NY Plaza - New York NY  (W)",
+    "values": {
+      "location": "40.702802,-74.012784"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "36th and Madison - New York NY  (W)",
+    "values": {
+      "location": "40.748917,-73.982683"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "125th St. btwn Adam Clayton & FDB - New York NY",
+    "values": {
+      "location": "40.808952,-73.948229"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "70th & Broadway - New York NY  (W)",
+    "values": {
+      "location": "40.777463,-73.982237"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "2138 Broadway - New York NY  (W)",
+    "values": {
+      "location": "40.781078,-73.981167"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "118th & Frederick Douglas Blvd. - New York NY  (W)",
+    "values": {
+      "location": "40.806176,-73.954109"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "42nd & Second - New York NY  (W)",
+    "values": {
+      "location": "40.750069,-73.973393"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "Broadway @ 81st - New York NY  (W)",
+    "values": {
+      "location": "40.784972,-73.978987"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "Fashion Inst of Technology - New York NY",
+    "values": {
+      "location": "40.746948,-73.994557"
+    }
+  }
+}
+~~~
+
+もう一つターミナルを開いて、JSON形式のデータをDroonga Engineに送信しましょう。
+
+以下のようにして`stores.jsons`を送信します:
+
+~~~
+$ droonga-request stores.jsons
+Elapsed time: 0.01101195
+[
+  "droonga.message",
+  1393562553,
+  {
+    "inReplyTo": "1393562553.8918273",
+    "statusCode": 200,
+    "type": "add.result",
+    "body": true
+  }
+]
+...
+Elapsed time: 0.004817463
+[
+  "droonga.message",
+  1393562554,
+  {
+    "inReplyTo": "1393562554.2447524",
+    "statusCode": 200,
+    "type": "add.result",
+    "body": true
+  }
+]
+~~~
+
+これで、Droonga Engineを用いてスターバックスの店舗データベースを検索する準備ができました。
+
+### droonga-requestでリクエストを送る
+
+動作を確認してみましょう。クエリを以下のようなJSONファイルとして作成します。
+
+search-all-stores.json:
+
+~~~
+{
+  "dataset": "Default",
+  "type": "search",
+  "body": {
+    "queries": {
+      "stores": {
+        "source": "Store",
+        "output": {
+          "elements": [
+            "startTime",
+            "elapsedTime",
+            "count",
+            "attributes",
+            "records"
+          ],
+          "attributes": ["_key"],
+          "limit": -1
+        }
+      }
+    }
+  }
+}
+~~~
+
+Droonga Engine にリクエストを送信します:
+
+~~~
+$ droonga-request search-all-stores.json
+Elapsed time: 0.008286785
+[
+  "droonga.message",
+  1393562604,
+  {
+    "inReplyTo": "1393562604.4970381",
+    "statusCode": 200,
+    "type": "search.result",
+    "body": {
+      "stores": {
+        "count": 40,
+        "records": [
+          [
+            "15th & Third - New York NY  (W)"
+          ],
+          [
+            "41st and Broadway - New York NY  (W)"
+          ],
+          [
+            "84th & Third Ave - New York NY  (W)"
+          ],
+          [
+            "Macy's 35th Street Balcony - New York NY"
+          ],
+          [
+            "Second @ 81st - New York NY  (W)"
+          ],
+          [
+            "52nd & Seventh - New York NY  (W)"
+          ],
+          [
+            "1585 Broadway (47th) - New York NY  (W)"
+          ],
+          [
+            "54th & Broadway - New York NY  (W)"
+          ],
+          [
+            "60th & Broadway-II - New York NY  (W)"
+          ],
+          [
+            "63rd & Broadway - New York NY  (W)"
+          ],
+          [
+            "2 Columbus Ave. - New York NY  (W)"
+          ],
+          [
+            "NY Plaza - New York NY  (W)"
+          ],
+          [
+            "2138 Broadway - New York NY  (W)"
+          ],
+          [
+            "Broadway @ 81st - New York NY  (W)"
+          ],
+          [
+            "76th & Second - New York NY  (W)"
+          ],
+          [
+            "2nd Ave. & 9th Street - New York NY"
+          ],
+          [
+            "150 E. 42nd Street - New York NY  (W)"
+          ],
+          [
+            "Macy's 6th Floor - Herald Square - New York NY  (W)"
+          ],
+          [
+            "Herald Square- Macy's - New York NY"
+          ],
+          [
+            "Macy's 5th Floor - Herald Square - New York NY  (W)"
+          ],
+          [
+            "Marriott Marquis - Lobby - New York NY"
+          ],
+          [
+            "85th & First - New York NY  (W)"
+          ],
+          [
+            "1656 Broadway - New York NY  (W)"
+          ],
+          [
+            "Limited Brands-NYC - New York NY"
+          ],
+          [
+            "2 Broadway - New York NY  (W)"
+          ],
+          [
+            "36th and Madison - New York NY  (W)"
+          ],
+          [
+            "125th St. btwn Adam Clayton & FDB - New York NY"
+          ],
+          [
+            "118th & Frederick Douglas Blvd. - New York NY  (W)"
+          ],
+          [
+            "Fashion Inst of Technology - New York NY"
+          ],
+          [
+            "1st Avenue & 75th St. - New York NY  (W)"
+          ],
+          [
+            "West 43rd and Broadway - New York NY  (W)"
+          ],
+          [
+            "80th & York - New York NY  (W)"
+          ],
+          [
+            "Columbus @ 67th - New York NY  (W)"
+          ],
+          [
+            "45th & Broadway - New York NY  (W)"
+          ],
+          [
+            "92nd & 3rd - New York NY  (W)"
+          ],
+          [
+            "165 Broadway - 1 Liberty - New York NY  (W)"
+          ],
+          [
+            "19th & 8th - New York NY  (W)"
+          ],
+          [
+            "195 Broadway - New York NY  (W)"
+          ],
+          [
+            "70th & Broadway - New York NY  (W)"
+          ],
+          [
+            "42nd & Second - New York NY  (W)"
+          ]
+        ]
+      }
+    }
+  }
+]
+~~~
+
+店舗の名前が取得できました。エンジンは正しく動作しているようです。引き続き Protocol Adapter を構築して、検索リクエストをHTTPで受け付けられるようにしましょう。
+
+## HTTP Protocol Adapter を用意する
+
+HTTP Protocol Adapterとして`droonga-http-server`を使用しましょう。
+
+### droonga-http-serverをインストールする
+
+インストールスクリプトをダウンロードし、root権限で`bash`で実行して下さい:
+
+~~~
+# curl https://raw.githubusercontent.com/droonga/droonga-http-server/master/install.sh | \
+    bash
+...
+Installing droonga-http-server from npmjs.org...
+...
+Preparing the user...
+...
+Setting up the configuration directory...
+The droonga-engine service is detected on this node.
+The droonga-http-server is configured to be connected
+to this node (XXXXXXXX).
+This node is configured with a hostname XXXXXXXX.
+
+Registering droonga-http-server as a service...
+...
+Successfully installed droonga-http-server.
+~~~
+
+### `droonga-http-server`サービスの起動と終了
+
+`droonga-http-server`サービスは`service`コマンドを使って起動できます:
+
+~~~
+# service droonga-http-server start
+~~~
+
+終了する場合も、`service`コマンドを使います:
+
+~~~
+# service droonga-http-server stop
+~~~
+
+確認できたら、再び`droonga-http-server`を起動します。
+
+~~~
+# service droonga-http-server start
+~~~
+
+### HTTPでの検索リクエスト
+
+準備が整いました。Protocol Adapter に向けて HTTP 経由でリクエストを発行し、データベースに問い合わせを行ってみましょう。まずは `Store` テーブルの中身を取得してみます。以下のようなリクエストを用います。`attributes=_key` は「検索結果に `_key` の値を含めて返してほしい」という指定です。これを指定しないと、`records` には何も値を持たないレコードが返ってきてしまいます。なお、`attributes` パラメータには `,` 区切りで複数の属性を指定できます。例えば `attributes=_key,location` と指定すると、緯度経度もレスポンスとして受け取ることができます。
+
+    $ curl "http://192.168.100.50:10041/tables/Store?attributes=_key&limit=-1"
+    {
+      "stores": {
+        "count": 40,
+        "records": [
+          [
+            "15th & Third - New York NY  (W)"
+          ],
+          [
+            "41st and Broadway - New York NY  (W)"
+          ],
+          [
+            "84th & Third Ave - New York NY  (W)"
+          ],
+          [
+            "Macy's 35th Street Balcony - New York NY"
+          ],
+          [
+            "Second @ 81st - New York NY  (W)"
+          ],
+          [
+            "52nd & Seventh - New York NY  (W)"
+          ],
+          [
+            "1585 Broadway (47th) - New York NY  (W)"
+          ],
+          [
+            "54th & Broadway - New York NY  (W)"
+          ],
+          [
+            "60th & Broadway-II - New York NY  (W)"
+          ],
+          [
+            "63rd & Broadway - New York NY  (W)"
+          ],
+          [
+            "2 Columbus Ave. - New York NY  (W)"
+          ],
+          [
+            "NY Plaza - New York NY  (W)"
+          ],
+          [
+            "2138 Broadway - New York NY  (W)"
+          ],
+          [
+            "Broadway @ 81st - New York NY  (W)"
+          ],
+          [
+            "76th & Second - New York NY  (W)"
+          ],
+          [
+            "2nd Ave. & 9th Street - New York NY"
+          ],
+          [
+            "150 E. 42nd Street - New York NY  (W)"
+          ],
+          [
+            "Macy's 6th Floor - Herald Square - New York NY  (W)"
+          ],
+          [
+            "Herald Square- Macy's - New York NY"
+          ],
+          [
+            "Macy's 5th Floor - Herald Square - New York NY  (W)"
+          ],
+          [
+            "Marriott Marquis - Lobby - New York NY"
+          ],
+          [
+            "85th & First - New York NY  (W)"
+          ],
+          [
+            "1656 Broadway - New York NY  (W)"
+          ],
+          [
+            "Limited Brands-NYC - New York NY"
+          ],
+          [
+            "2 Broadway - New York NY  (W)"
+          ],
+          [
+            "36th and Madison - New York NY  (W)"
+          ],
+          [
+            "125th St. btwn Adam Clayton & FDB - New York NY"
+          ],
+          [
+            "118th & Frederick Douglas Blvd. - New York NY  (W)"
+          ],
+          [
+            "Fashion Inst of Technology - New York NY"
+          ],
+          [
+            "1st Avenue & 75th St. - New York NY  (W)"
+          ],
+          [
+            "West 43rd and Broadway - New York NY  (W)"
+          ],
+          [
+            "80th & York - New York NY  (W)"
+          ],
+          [
+            "Columbus @ 67th - New York NY  (W)"
+          ],
+          [
+            "45th & Broadway - New York NY  (W)"
+          ],
+          [
+            "92nd & 3rd - New York NY  (W)"
+          ],
+          [
+            "165 Broadway - 1 Liberty - New York NY  (W)"
+          ],
+          [
+            "19th & 8th - New York NY  (W)"
+          ],
+          [
+            "195 Broadway - New York NY  (W)"
+          ],
+          [
+            "70th & Broadway - New York NY  (W)"
+          ],
+          [
+            "42nd & Second - New York NY  (W)"
+          ]
+        ]
+      }
+    }
+
+`count` の値からデータが全部で 40 件あることがわかります。`records` に配列として検索結果が入っています。
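+
+なお、前述の通り `attributes` パラメータには複数の属性を指定できます。例えば `_key` と `location` の両方を受け取りたい場合は、以下のようなリクエストになります(出力は省略します):
+
+    $ curl "http://192.168.100.50:10041/tables/Store?attributes=_key,location&limit=-1"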
+
+もう少し複雑なクエリを試してみましょう。例えば、店名に「Columbus」を含む店舗を検索します。`query` パラメータにクエリ `Columbus` を、`match_to` パラメータに検索対象として `_key` を指定し、以下のようなリクエストを発行します。
+
+    $ curl "http://192.168.100.50:10041/tables/Store?query=Columbus&match_to=_key&attributes=_key&limit=-1"
+    {
+      "stores": {
+        "count": 2,
+        "records": [
+          [
+            "Columbus @ 67th - New York NY  (W)"
+          ],
+          [
+            "2 Columbus Ave. - New York NY  (W)"
+          ]
+        ]
+      }
+    }
+
+以上 2 件が検索結果として該当することがわかりました。
+
+Droonga HTTP Serverの詳細については[リファレンスマニュアル][http-server]を参照して下さい。
+
+
+## まとめ
+
+[Ubuntu Linux][Ubuntu] または [CentOS][] 上に [Droonga][] を構成するパッケージである [droonga-engine][] と [droonga-http-server][] をセットアップしました。
+これらのパッケージを利用することで、HTTP Protocol Adapter と Droonga Engine からなるシステムを構築し、実際に検索を行いました。
+
+
+  [http-server]: ../../reference/http-server/
+  [Ubuntu]: http://www.ubuntu.com/
+  [CentOS]: https://www.centos.org/
+  [Droonga]: https://droonga.org/
+  [droonga-engine]: https://github.com/droonga/droonga-engine
+  [droonga-http-server]: https://github.com/droonga/droonga-http-server
+  [Groonga]: http://groonga.org/
+  [Ruby]: http://www.ruby-lang.org/
+  [nvm]: https://github.com/creationix/nvm
+  [Socket.IO]: http://socket.io/
+  [Fluentd]: http://fluentd.org/
+  [Node.js]: http://nodejs.org/

  Added: ja/tutorial/1.1.0/benchmark/index.md (+811 -0) 100644
===================================================================
--- /dev/null
+++ ja/tutorial/1.1.0/benchmark/index.md    2014-11-30 23:20:40 +0900 (12a039e)
@@ -0,0 +1,811 @@
+---
+title: "DroongaとGroongaのベンチマークの取り方"
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/tutorial/1.1.0/benchmark/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+<!--
+this is based on https://github.com/droonga/presentation-droonga-meetup-1-introduction/blob/master/benchmark/README.md
+-->
+
+## チュートリアルのゴール
+
+[Droonga][]クラスタのベンチマークを測定し、[Groonga][groonga]での結果と比較するまでの、一連の手順を学ぶこと。
+
+## 前提条件
+
+* [Ubuntu][]または[CentOS][]のサーバの操作に関する基本的な知識と経験があること。
+* [Groonga][groonga]をHTTP経由で操作する際の基本的な知識と経験があること。
+* [Droonga][]クラスタの構築手順について基本的な知識があること。
+  このチュートリアルの前に、[「使ってみる」のチュートリアル](../groonga/)を完了しておいて下さい。
+
+## ベンチマークの必要性について
+
+DroongaはGroongaと互換性があるため、GroongaベースのアプリケーションをDroongaに移行することを検討することもあるでしょう。
+そんな時は、実際に移行する前に、Droongaの性能を測定して、より良い移行先であるかどうかを確認しておくべきです。
+
+もちろん、単にGroongaとDroongaの性能差を知りたいと思うこともあるでしょう。
+ベンチマークによって、差を可視化することができます。
+
+
+### 性能の可視化の方法
+
+あるシステムの性能を表す指標としては、以下の2つが多く使われます。
+
+ * レイテンシー
+ * スループット
+
+レイテンシーとは、システムがリクエストを受け取ってからレスポンスを返すまでに実際にかかった応答時間のことです。
+言い換えると、これは各リクエストについてクライアントが待たされた時間です。
+この指標においては、数値は小さければ小さいほどよいです。
+一般的に、クエリが軽い場合や、データベースのサイズが小さい場合、クライアント数が少ない場合に、レイテンシーは小さくなります。
+
+スループットは、一度にどれだけの数のリクエストを捌けるかを意味するものです。
+性能の指標は「*クエリ毎秒*(Queries Per Second, *qps*)」という単位で表されます。
+例えば、あるGroongaサーバが1秒に10件のリクエストを処理できたとき、これを「10qps」と表現します。
+10人のユーザ(クライアント)がいるのかもしれませんし、2人のユーザがそれぞれブラウザ上で5つのタブを開いているのかもしれません。
+ともかく、「10qps」という数値は、1秒が経過する間にそのGroongaサーバが実際に10件のリクエストを受け付けて、レスポンスを返したということを意味します。
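+
+なお、スループットの値自体は、概ね次のような単純な計算で求められるものです:
+
+~~~
+qps = 計測期間中に処理されたリクエストの件数 ÷ 計測時間(秒)
+~~~
+
+例えば、後述するCSVの出力例の1行目(1クライアントの場合)は、計測時間を30秒と仮定すると 996件 ÷ 30秒 ≒ 33.2qps という対応になっています。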
+
+ベンチマークは、[drnbench][]というGemパッケージによって導入される`drnbench-request-response`コマンドで行うことができます。
+このツールは、計測対象のサービスについてレイテンシーとスループットの両方を計測できます。
+
+
+### ベンチマークツールはどのように性能を測定するのか
+
+`drnbench-request-response`は、対象サービスの性能を以下のようにして計測します:
+
+ 1. マスタープロセスが仮想クライアントを1つ生成する。
+    このクライアントは即座に動き始め、対象サービスに対して多数のリクエストを連続して頻繁に送り続ける。
+ 2. しばらくしたら、マスタープロセスがクライアントを終了させる。
+    そして、応答のデータから最小・最大・平均の経過時間を計算する。
+    また、実際に対象サービスによって処理されたリクエストの件数を集計し、結果を1クライアントの場合のqps値として報告する。
+ 3. マスタープロセスが仮想クライアントを2つ生成する。
+    これらのクライアントはリクエストを送り始める。
+ 4. しばらくしたら、マスタープロセスがすべてのクライアントを終了させる。
+    そして、最小・最大・平均の経過時間を計算すると同時に、実際に対象サービスに処理されたリクエストの件数を集計し、結果を2クライアントの場合のqps値として報告する。
+ 5. 3クライアントの場合、4クライアントの場合……と、クライアント数を増やしながら繰り返す。
+ 6. 最後に、マスタープロセスが最小・最大・平均の経過時間、qps値、およびその他の情報をまとめたものを、以下のようなCSVファイルとして保存する:
+    
+    ~~~
+    n_clients,total_n_requests,queries_per_second,min_elapsed_time,max_elapsed_time,average_elapsed_time,200
+    1,996,33.2,0.001773766,0.238031643,0.019765581680722916,100.0
+    2,1973,65.76666666666667,0.001558398,0.272225481,0.020047345673086702,100.0
+    4,3559,118.63333333333334,0.001531184,0.39942581,0.023357554419499882,100.0
+    6,4540,151.33333333333334,0.001540704,0.501663069,0.042344890696916264,100.0
+    8,4247,141.56666666666666,0.001483995,0.577100609,0.045836844514480835,100.0
+    10,4466,148.86666666666667,0.001987089,0.604507078,0.06949704923846833,100.0
+    12,4500,150.0,0.001782343,0.612596799,0.06902839555222215,100.0
+    14,4183,139.43333333333334,0.001980711,0.60754769,0.1033681068718623,100.0
+    16,4519,150.63333333333333,0.00284654,0.653204575,0.09473386513387955,100.0
+    18,4362,145.4,0.002330049,0.640683693,0.12581190483929405,100.0
+    20,4228,140.93333333333334,0.003710795,0.662666076,0.1301649290901133,100.0
+    ~~~
+    
+    この結果は、分析や、グラフ描画など、様々な使い方ができます。
+    
+    (注意: 性能測定の結果は様々な要因によって変動します。
+    これはあくまで特定のバージョン、特定の環境での結果の例です。)
+
+### 結果の読み方と分析の仕方 {#how-to-analyze}
+
+上の例を見て下さい。
+
+#### HTTPレスポンスのステータス
+
+最後の列、`200`を見て下さい。
+これはHTTPレスポンスのステータスの割合を示しています。
+`200`は「OK」、`0`は「タイムアウト」です。
+`400`や`500`などのエラーレスポンスが得られた場合も、同様に報告されます。
+これらの情報は、意図しない速度低下の原因究明に役立つでしょう。
+
+#### レイテンシー
+
+レイテンシーは簡単に分析できます。値が小さければ小さいほどよいと言えます。
+対象サービスのキャッシュ機構が正常に動作している場合、最小と平均の応答時間は小さくなります。
+最大応答時間は、重たいクエリ、システムのメモリのスワップの発生、意図しないエラーの発生などの影響を受けます。
+
+レイテンシーのグラフは、有用な同時接続数の上限も明らかにします。
+
+![レイテンシーのグラフ](/images/tutorial/benchmark/latency-groonga-1.0.8.png)
+
+これは`average_elapsed_time`のグラフです。
+4クライアントを越えた所で経過時間が増加していることが見て取れるでしょう。
+これは何を意味するのでしょうか?
+
+Groongaは利用可能なプロセッサ数と同じ数だけのリクエストを完全に並行処理できます。
+コンピュータのプロセッサ数が4である場合、そのシステムは4件以下のリクエストについては余計な待ち時間無しで同時に処理することができます。
+それ以上の数のリクエストが来た場合、5番目以降のリクエストは、それ以前に受け付けたリクエストの処理完了後に処理されます。
+先のグラフは、この理論上の上限が事実であることを示しています。
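+
+なお、Linux環境であれば、利用可能なプロセッサ数は例えば以下のように確認できます(出力は環境によって異なります。以下は4プロセッサの場合の例です):
+
+~~~
+% nproc
+4
+~~~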
+
+#### スループット
+
+スループット性能の分析にも、グラフが便利です。
+
+![スループットのグラフ](/images/tutorial/benchmark/throughput-groonga-1.0.8.png)
+
+6クライアントを超えたあたりで、qps値が150前後で頭打ちになっているのを見て取れるでしょう。
+これは、計測対象のサービスが1秒あたり最大で150件のリクエストを処理できるということを意味しています。
+
+言い直すと、この結果は「(ハードウェア、ソフトウェア、ネットワーク、データベースの大きさ、クエリの内容など、様々な要素をひっくるめた)このシステムのスループットの性能限界は150qpsである」という風に読み取ることができます。
+もしサービスに対するリクエストの件数が増加しつつあり、この限界に近づいているようであれば、クエリの最適化やコンピュータ自体のアップグレードなど、何らかの対策を取ることを検討する必要があると言えます。
+
+#### 性能の比較
+
+同じリクエストのパターンをGroongaとDroongaに送ることで、各システムの性能を比較することができます。
+もしDroongaの方が性能が良ければ、サービスのバックエンドをGroongaからDroongaに移行する根拠になり得ます。
+
+また、異なるノード数での結果を比較すると、新しくノードを追加する際のコストパフォーマンスを分析することもできます。
+
+
+## ベンチマーク環境を用意する
+
+新しいDroongaクラスタのために、以下の、互いにホスト名で名前解決できる4つの[Ubuntu][] 14.04LTSのサーバがあると仮定します:
+
+ * `192.168.100.50`、ホスト名:`node0`
+ * `192.168.100.51`、ホスト名:`node1`
+ * `192.168.100.52`、ホスト名:`node2`
+ * `192.168.100.53`、ホスト名:`node3`
+
+1つはクライアント用で、残りの3つはDroongaノード用です。
+
+### 比較対照のデータベース(およびそのデータソース)を用意する
+
+もしすでにGroongaベースのサービスを運用しているのであれば、それ自体が比較対照となります。
+この場合、Groongaデータベースの内容すべてをダンプ出力し、新しく用意したDroongaクラスタに流し込みさえすれば、性能比較を行えます。
+
+特に運用中のサービスが無いということであれば、有効なベンチマークを取るために大量のデータを格納したデータベースを、対照として用意する必要があります。
+[wikipedia-search][]リポジトリには、[Wikipedia日本語版](http://ja.wikipedia.org/)のページを格納したGroongaサーバ(およびDroongaクラスタ)を用意する手助けとなるスクリプトが含まれています。
+
+では、Wikipediaのページを格納したGroongaデータベースを、`node0`のノードに準備しましょう。
+
+ 1. データベースのサイズを決める。
+    ベンチマーク測定のためには、十分に大きいサイズのデータベースを使う必要があります。
+    
+    * もしデータベースが小さすぎれば、Droongaのオーバーヘッドが相対的に大きくなるため、Droongaにとって過度に悲観的なベンチマーク結果となるでしょう。
+    * もしデータベースが大きすぎれば、メモリのスワップが発生してシステムの性能がランダムに劣化するために、過度に不安定なベンチマーク結果となるでしょう。
+    * 各ノードのメモリの搭載量が異なる場合、その中で最もメモリ搭載量が少ないノードに合わせてデータベースのサイズを決めるのが望ましいです。
+
+    例えば、`node0` (8GB RAM), `node1` (8GB RAM), `node2` (6GB RAM)の3つのノードがあるとすれば、データベースは6GBよりも小さくするべきです。
+ 2. [インストール手順](http://groonga.org/ja/docs/install.html)に従ってGroongaサーバをセットアップする。
+    
+    ~~~
+    (on node0)
+    % sudo apt-get -y install software-properties-common
+    % sudo add-apt-repository -y universe
+    % sudo add-apt-repository -y ppa:groonga/ppa
+    % sudo apt-get update
+    % sudo apt-get -y install groonga
+    ~~~
+    
+    これでGroongaを利用できるようになります。
+ 3. Rakeのタスク `data:convert:groonga:ja` を使って、Wikipediaのページのアーカイブをダウンロードし、Groongaのダンプファイルに変換する。
+    変換するレコード(ページ)の数は、環境変数 `MAX_N_RECORDS`(初期値は5000)で指定することができます。
+    
+    ~~~
+    (on node0)
+    % cd ~/
+    % git clone https://github.com/droonga/wikipedia-search.git
+    % cd wikipedia-search
+    % bundle install --path vendor/
+    % time (MAX_N_RECORDS=1500000 bundle exec rake data:convert:groonga:ja \
+                                    data/groonga/ja-pages.grn)
+    ~~~
+    
+    アーカイブは非常に大きいため、ダウンロードと変換には時間がかかります。
+    
+    変換が終わったら、`~/wikipedia-search/data/groonga/ja-pages.grn`の位置にダンプファイルが生成されています。
+    新しいデータベースを作成し、ダンプファイルの内容を流し込みましょう。
+    この操作にも時間がかかります:
+    
+    ~~~
+    (on node0)
+    % mkdir -p $HOME/groonga/db/
+    % groonga -n $HOME/groonga/db/db quit
+    % time (cat ~/wikipedia-search/config/groonga/schema.grn | groonga $HOME/groonga/db/db)
+    % time (cat ~/wikipedia-search/config/groonga/indexes.grn | groonga $HOME/groonga/db/db)
+    % time (cat ~/wikipedia-search/data/groonga/ja-pages.grn | groonga $HOME/groonga/db/db)
+    ~~~
+    
+    注意: レコードの数がデータベースのサイズに影響します。
+    参考までに、検証環境での結果を以下に示します:
+    
+     * 30万件のレコードから、1.1GBのデータベースができました。
+       データの変換には17分、流し込みには6分を要しました。
+     * 150万件のレコードから、4.3GBのデータベースができました。
+       データの変換には53分、流し込みには64分を要しました。
+    
+ 4. GroongaをHTTPサーバとして起動する
+    
+    ~~~
+    (on node0)
+    % groonga -p 10041 -d --protocol http $HOME/groonga/db/db
+    ~~~
+
+これで、このノードをベンチマーク測定の対照として使う準備が整いました。
+
+
+### Droongaクラスタをセットアップする
+
+Droongaをすべてのノードにインストールします。
+HTTP経由での動作をベンチマーク測定するので、`droonga-engine`と`droonga-http-server`の両方をインストールする必要があります。
+
+~~~
+(on node0)
+% host=node0
+% curl https://raw.githubusercontent.com/droonga/droonga-engine/master/install.sh | \
+    sudo HOST=$host bash
+% curl https://raw.githubusercontent.com/droonga/droonga-http-server/master/install.sh | \
+    sudo ENGINE_HOST=$host HOST=$host PORT=10042 bash
+% sudo droonga-engine-catalog-generate \
+    --hosts=node0,node1,node2
+% sudo service droonga-engine start
+% sudo service droonga-http-server start
+~~~
+
+~~~
+(on node1)
+% host=node1
+...
+~~~
+
+~~~
+(on node2)
+% host=node2
+...
+~~~
+
+注意: `droonga-http-server`をGroongaとは別のポート番号で起動するために、ここでは`PORT`環境変数を使って上記のようにして`10042`のポートで起動するように指定しています。
+
+DroongaのHTTPサーバが動作しており、`10042`番のポートを監視していることと、3つのノードからなるクラスタとして動作していることを確認しておきましょう:
+
+~~~
+(on node0)
+% sudo apt-get install -y jq
+% curl "http://node0:10042/droonga/system/status" | jq .
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node1:10031/droonga": {
+      "live": true
+    },
+    "node2:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+
+### GroongaからDroongaへとデータを同期する
+
+次に、Droongaのデータベースを用意します。
+
+`grn2drn`コマンドを使うと、Groongaのダンプ出力をDroonga用のメッセージに変換することができます。
+コマンドを利用できるようにするために、Groongaサーバとなっているコンピュータに`grn2drn` Gemパッケージをインストールしましょう。
+
+~~~
+(on node0)
+% sudo gem install grn2drn
+~~~
+
+また、`rroonga` Gemパッケージの一部として導入される`grndump`コマンドは、既存のGroongaのデータベースからすべてのデータを柔軟に取り出す機能を提供しています。
+もし既存のGroongaサーバからデータを取り出そうとしているのであれば、事前に`rroonga`をインストールしておく必要があります。
+
+~~~
+(on Ubuntu server)
+% sudo apt-get -y install software-properties-common
+% sudo add-apt-repository -y universe
+% sudo add-apt-repository -y ppa:groonga/ppa
+% sudo apt-get update
+% sudo apt-get -y install libgroonga-dev
+% sudo gem install rroonga
+~~~
+
+~~~
+(on CentOS server)
+# rpm -ivh http://packages.groonga.org/centos/groonga-release-1.1.0-1.noarch.rpm
+# yum -y makecache
+# yum install -y ruby-devel groonga-devel
+# gem install rroonga
+~~~
+
+それでは、スキーマ定義とデータを別々にダンプ出力し、Droongaクラスタに流し込みましょう。
+
+~~~
+(on node0)
+% time (grndump --no-dump-tables $HOME/groonga/db/db | \
+          grn2drn | \
+          droonga-send --server=node0 \
+                       --report-throughput)
+% time (grndump --no-dump-schema --no-dump-indexes $HOME/groonga/db/db | \
+          grn2drn | \
+          droonga-send --server=node0 \
+                       --server=node1 \
+                       --server=node2 \
+                       --messages-per-second=100 \
+                       --report-throughput)
+~~~
+
+スキーマ定義とインデックスの定義については単一のエンドポイントに送るように注意して下さい。
+Droongaは複数のノードに並行してバラバラに送られたスキーマ変更コマンドをソートすることができないので、スキーマ定義のリクエストを複数のエンドポイントに流し込むと、データベースが壊れてしまいます。
+
+トラフィックとシステムの負荷を軽減するために、1秒あたりに流入するメッセージの量を`--messages-per-second`オプションで制限するようにしてください。
+大量のメッセージが一度にDroongaクラスタに流れ込むと、システムの限界を超えてしまい、Droongaがメモリを食い潰して、システムを非常に低速にしてしまう恐れがあります。
+
+この操作にも時間がかかります。
+例えば `--messages-per-second=100` と指定した場合、150万件のレコードを同期するにはだいたい4時間ほどかかります(必要な時間は `1500000 / 100 / 60 / 60` のような計算式で見積もれます)。
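+
+参考までに、この見積もりは以下のようにして計算できます(`bc`コマンドが利用できる環境を仮定した一例です):
+
+~~~
+% echo "1500000 / 100 / 60 / 60" | bc -l
+4.16666666666666666666
+~~~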
+
+以上の手順により、`10041`ポートを監視するGroonga HTTPサーバと、`10042`ポートを監視するDroonga HTTPサーバの、2つのHTTPサーバを用意できます。
+
+
+### クライアントをセットアップする
+
+クライアントにするマシンには、ベンチマーク用のクライアントをインストールする必要があります。
+
+`node3`をクライアントとして使うと仮定します:
+
+~~~
+(on node3)
+% sudo apt-get update
+% sudo apt-get -y upgrade
+% sudo apt-get install -y ruby curl jq
+% sudo gem install drnbench
+~~~
+
+
+## リクエストパターンを用意する
+
+ベンチマーク用のリクエストパターンファイルを用意しましょう。
+
+### キャッシュヒット率を決める
+
+まず、キャッシュヒット率を決める必要があります。
+
+もし既に運用中のGroongaベースのサービスがあるのであれば、以下のようにして、`status`コマンドを使ってGroongaデータベースのキャッシュヒット率を調べることができます:
+
+~~~
+% curl "http://node0:10041/d/status" | jq .
+[
+  [
+    0,
+    1412326645.19701,
+    3.76701354980469e-05
+  ],
+  {
+    "max_command_version": 2,
+    "alloc_count": 158,
+    "starttime": 1412326485,
+    "uptime": 160,
+    "version": "4.0.6",
+    "n_queries": 1000,
+    "cache_hit_rate": 0.5,
+    "command_version": 1,
+    "default_command_version": 1
+  }
+]
+~~~
+
+キャッシュヒット率は`"cache_hit_rate"`として返却されます。
+`0.5`は50%という意味で、レスポンスのうちの半分がキャッシュされた結果に基づいて返されているということです。
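+
+なお、キャッシュヒット率の値だけを取り出したい場合は、上記のレスポンスに対して以下のように`jq`のフィルタを指定する方法もあります(一例です):
+
+~~~
+% curl "http://node0:10041/d/status" | jq ".[1].cache_hit_rate"
+0.5
+~~~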
+
+運用中のサービスが無いのであれば、ひとまずキャッシュヒット率は50%と仮定すると良いでしょう。
+
+GroongaとDroongaの性能を正確に比較するためには、キャッシュヒット率が実際の値に近くなるようにリクエストパターンを用意する必要があります。
+さて、どのようにすればよいのでしょうか?
+
+キャッシュヒット率は、ユニーク(一意)なリクエストパターンの数で制御できます。必要なパターン数は`N = 100 ÷ (キャッシュヒット率)`という式で求められます。
+これは、GroongaとDroonga(`droonga-http-server`)が既定の状態で最大で100件までの結果をキャッシュするためです。
+期待されるキャッシュヒット率が50%なのであれば、用意するべきユニークなリクエストの数は`N = 100 ÷ 0.5 = 200`と計算できます。
+
+注意: 実際のキャッシュヒット率が0に近い場合、必要となるユニークなリクエストの件数が巨大になってしまいます。
+このような場合は、キャッシュヒット率を`0.01`(1%)程度と見なすとよいでしょう。
+
+
+### リクエストパターンファイルの書式
+
+`drnbench-request-response`用のリクエストパターンのリストは、HTTPリクエストのパスを1行に1つずつ並べたプレーンテキスト形式で作成します。
+以下はGroongaの`select`コマンド用のリクエストの一覧の例です:
+
+~~~
+/d/select?command_version=2&table=Pages&limit=10&match_columns=title&output_columns=title&query=AAA
+/d/select?command_version=2&table=Pages&limit=10&match_columns=title&output_columns=title&query=BBB
+...
+~~~
+
+もし既存のGroongaベースのサービスを運用しているのであれば、リクエストパターンのリストは、実際のアクセスログやクエリログなどから生成するのが望ましいです。
+実際のリクエストに近いパターンであるほど、システムの性能をより有効に測定できます。
+ユニークなリクエストパターンを200件作るには、ログからユニークなリクエスト先パスを200件収集してくればOKです。
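+
+以下は、リクエスト先のパスが7番目のフィールドに記録されているApache/Nginx風のアクセスログから、ユニークなパスを200件抽出する場合のごく簡単な例です(ログファイル名`access.log`やフィールドの位置は説明のための仮定です。実際のログの形式に合わせて調整して下さい):
+
+~~~
+% grep "/d/select?" access.log | \
+    awk '{print $7}' | \
+    sort -u | \
+    head -200 > patterns.txt
+~~~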
+
+運用中のサービスが無い場合は、何らかの方法でリクエストパスのリストを作る必要があります。
+詳しくは次項を参照して下さい。
+
+### 検索語句のリストを用意する
+
+200件のユニークなリクエストパターンを作るには、200個の語句を用意する必要があります。
+しかも、それらはすべて実際にGroongaのデータベースで有効な検索結果を返すものでなくてはなりません。
+もしランダムに生成した単語(例えば`P2qyNJ9L`, `Hy4pLKc5`, `D5eftuTp`……といった具合)を使った場合、ほとんどのリクエストに対して「ヒット無し」という検索結果が返されてしまうため、有効なベンチマーク結果を得ることができません。
+
+こんな時のために、`drnbench-extract-searchterms`というユーティリティコマンドがあります。
+これは、以下のようにしてGroongaの検索結果から単語のリストを生成します:
+
+~~~
+% curl "http://node0:10041/d/select?command_version=2&table=Pages&limit=10&output_columns=title" | \
+    drnbench-extract-searchterms
+title1
+title2
+title3
+...
+title10
+~~~
+
+`drnbench-extract-searchterms`は検索結果のレコードの最初の列の値を単語として取り出します。
+200件の有効な検索語句を得るには、単に`limit=200`と指定して検索結果を得ればOKです。
+
+
+### 与えられた語句からリクエストパターンファイルを生成する
+
+では、`drnbench-extract-searchterms`を使って、Groongaの検索結果からリクエストパターンを生成してみましょう。
+
+~~~
+% n_unique_requests=200
+% curl "http://node0:10041/d/select?command_version=2&table=Pages&limit=$n_unique_requests&output_columns=title" | \
+    drnbench-extract-searchterms --escape | \
+    sed -r -e "s;^;/d/select?command_version=2\&table=Pages\&limit=10\&match_columns=title,text\&output_columns=snippet_html(title),snippet_html(text),categories,_key\&query_flags=NONE\&sortby=title\&drilldown=categories\&drilldown_limit=10\&drilldown_output_columns=_id,_key,_nsubrecs\&drilldown_sortby=_nsubrecs\&query=;" \
+    > ./patterns.txt
+~~~
+
+注意:
+
+ * sedスクリプトの中の`&`は、前にバックスラッシュを置いて`\&`のようにエスケープする必要があることに注意して下さい。
+ * `drnbench-extract-searchterms`コマンドには、`--escape`オプションを指定すると良いでしょう。
+   この指定により、URIに含められない文字がエスケープされます。
+ * 得られた検索語句を`query`パラメータに使用する場合、`query_flags=NONE`も同時に指定すると良いでしょう。
+   この指定により、Groongaは`query`パラメータの中に含まれる特殊文字を無視するようになります。
+   この指定を忘れると、不正なクエリのエラーに遭遇することになるかもしれません。
+
+生成されたファイル `patterns.txt` は以下のような内容になります:
+
+~~~
+/d/select?command_version=2&table=Pages&limit=10&match_columns=title,text&output_columns=snippet_html(title),snippet_html(text),categories,_key&query_flags=NONE&sortby=title&drilldown=categories&drilldown_limit=10&drilldown_output_columns=_id,_key,_nsubrecs&drilldown_sortby=_nsubrecs&query=AAA
+/d/select?command_version=2&table=Pages&limit=10&match_columns=title,text&output_columns=snippet_html(title),snippet_html(text),categories,_key&query_flags=NONE&sortby=title&drilldown=categories&drilldown_limit=10&drilldown_output_columns=_id,_key,_nsubrecs&drilldown_sortby=_nsubrecs&query=BBB
+...
+~~~
+
+
+## ベンチマークを実行する
+
+以上で、準備が整いました。
+それではGroongaとDroongaのベンチマークを取得してみましょう。
+
+### Groongaのベンチマークを行う
+
+まず、比較対照としてGroongaでのベンチマーク結果を取得します。
+`node0`を比較対照用のGroongaサーバとしてセットアップ済みで、GroongaのHTTPサーバが停止している場合には、ベンチマークの実行前にあらかじめ起動しておいて下さい。
+
+~~~
+(on node0)
+% groonga -p 10041 -d --protocol http $HOME/groonga/db/db
+~~~
+
+ベンチマークは以下の要領で、`drnbench-request-response`コマンドを実行すると測定できます:
+
+~~~
+(on node3)
+% drnbench-request-response \
+    --step=2 \
+    --start-n-clients=0 \
+    --end-n-clients=20 \
+    --duration=30 \
+    --interval=10 \
+    --request-patterns-file=$PWD/patterns.txt \
+    --default-hosts=node0 \
+    --default-port=10041 \
+    --output-path=$PWD/groonga-result.csv
+~~~
+
+重要なパラメータは以下の通りです:
+
+ * `--step` は、各段階で増やす仮想クライアントの数です。
+ * `--start-n-clients` は、仮想クライアントの最初の数です。
+   例え`0`を指定したとしても、最初の実行時には必ず1つはクライアントが生成されます。
+ * `--end-n-clients` は、仮想クライアントの最大数です。
+   ベンチマークは、クライアントの数がこの上限に達するまでの間繰り返し実行されます。
+ * `--duration` は、1回あたりのベンチマークの実行にかける時間です。
+   この値は、結果が安定するまでに十分な長さの時間を指定するのが望ましいです。
+   筆者の場合は`30`(秒)が最適でした。
+ * `--interval` は、ベンチマークの合間に設ける待ち時間です。
+   これは、前回のベンチマークが終了するのに十分な長さの時間を指定するのが望ましいです。
+   筆者の場合は`10`(秒)が最適でした。
+ * `--request-patterns-file` は、パターンファイルへのパスです。
+ * `--default-hosts` は、リクエストの送信先のホスト名の一覧です。
+   複数のホストをカンマで区切って指定すると、ロードバランサーの動作をシミュレートすることもできます。
+ * `--default-port` は、リクエストの送信先のポート番号です。
+ * `--output-path` は、結果の出力先ファイルへのパスです。
+   すべてのベンチマークの統計情報が、この位置にファイルとして保存されます。
+
+ベンチマークの実行中は、`node0`のシステムの状態を`top`コマンドなどを使って監視しておきましょう。
+もしベンチマークがGroongaの性能を正しく引き出していれば、GroongaのプロセスはCPUをフルに使い切っているはずです(プロセッサ数4ならば`400%`、といった具合に)。
+そうでない場合は、何かがおかしいです。例えばネットワーク帯域が細すぎるのかもしれませんし、クライアントが非力すぎるのかもしれません。
+
+これで、対照用のGroongaでの結果を得る事ができます。
+
+結果が妥当かどうかを確かめるために、`status`コマンドの結果を確認しましょう:
+
+~~~
+% curl "http://node0:10041/d/status" | jq .
+[
+  [
+    0,
+    1412326645.19701,
+    3.76701354980469e-05
+  ],
+  {
+    "max_command_version": 2,
+    "alloc_count": 158,
+    "starttime": 1412326485,
+    "uptime": 160,
+    "version": "4.0.6",
+    "n_queries": 1000,
+    "cache_hit_rate": 0.49,
+    "command_version": 1,
+    "default_command_version": 1
+  }
+]
+~~~
+
+`"cache_hit_rate"`の値に注目してください。
+この値が想定されるキャッシュヒット率(例えば`0.5`)からかけ離れている場合、何かがおかしいです。例えば、リクエストパターンの数が少なすぎるのかも知れません。
+キャッシュヒット率が高すぎる場合、結果のスループットは本来よりも高すぎる値になってしまいます。
+
+Droongaノードの上でGroongaを動かしている場合は、CPU資源とメモリ資源を解放するために、ベンチマーク取得後はGroongaを停止しておきましょう。
+
+~~~
+(on node0)
+% pkill groonga
+~~~
+
+### Droongaのベンチマークを行う
+
+#### 1ノード構成でのDroongaのベンチマーク
+
+ベンチマークの前に、ノードが1つだけの状態にクラスタを設定します。
+
+~~~
+(on node1, node2)
+% sudo service droonga-engine stop
+% sudo service droonga-http-server stop
+~~~
+
+~~~
+(on node0)
+% sudo droonga-engine-catalog-generate \
+    --hosts=node0
+% sudo service droonga-engine restart
+% sudo service droonga-http-server restart
+~~~
+
+前回のベンチマークの影響をなくすために、各ベンチマークの実行前にはサービスを再起動することをおすすめします。
+
+
+これにより、`node0`は1ノード構成のクラスタとして動作するようになります。
+実際にノードが1つだけ認識されていることを確認しましょう:
+
+~~~
+(on node3)
+% curl "http://node0:10042/droonga/system/status" | jq .
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+ベンチマークを実行しましょう。
+
+~~~
+(on node3)
+% drnbench-request-response \
+    --step=2 \
+    --start-n-clients=0 \
+    --end-n-clients=20 \
+    --duration=30 \
+    --interval=10 \
+    --request-patterns-file=$PWD/patterns.txt \
+    --default-hosts=node0 \
+    --default-port=10042 \
+    --output-path=$PWD/droonga-result-1node.csv
+~~~
+
+デフォルトのポートが`10041`(GroongaのHTTPサーバのポート)から`10042`(Droongaのポート)に変わっていることに注意して下さい。
+結果の保存先のパスも変わっています。
+
+ベンチマークの実行中、`node0`のシステムの状態を`top`コマンドなどで監視しておきましょう。
+これはボトルネックの分析に役立ちます。
+
+また、結果が正しいかどうかを確かめるために、実際のキャッシュヒット率を確認しておきましょう:
+
+~~~
+% curl "http://node0:10042/statistics/cache" | jq .
+{
+  "hitRatio": 49.830717830807124,
+  "nHits": 66968,
+  "nGets": 134391
+}
+~~~
+
+`"hitRatio"`の値に注目してください。HTTPサーバにおける実際のキャッシュヒット率は、上記のようにパーセンテージで示されます(`49.830717830807124`という値はそのまま`49.830717830807124%`ということです)。
+もし値が期待されるキャッシュヒット率と大きく異なっている場合、何かがおかしいです。
+
+#### 2ノード構成でのDroongaのベンチマーク
+
+ベンチマークの前に、2番目のノードをクラスタに参加させます。
+
+~~~
+(on node0, node1)
+% sudo droonga-engine-catalog-generate \
+    --hosts=node0,node1
+% sudo service droonga-engine restart
+% sudo service droonga-http-server restart
+~~~
+
+これにより、`node0`と`node1`は2ノード構成のDroongaクラスタとして動作するようになります。
+実際にノードが2つ認識されていることを確認しましょう:
+
+~~~
+(on node3)
+% curl "http://node0:10042/droonga/system/status" | jq .
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node1:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+ベンチマークを実行しましょう。
+
+~~~
+(on node3)
+% drnbench-request-response \
+    --step=2 \
+    --start-n-clients=0 \
+    --end-n-clients=20 \
+    --duration=30 \
+    --interval=10 \
+    --request-patterns-file=$PWD/patterns.txt \
+    --default-hosts=node0,node1 \
+    --default-port=10042 \
+    --output-path=$PWD/droonga-result-2nodes.csv
+~~~
+
+`--default-hosts` で2つのホストを指定していることに注意して下さい。
+
+今の所、`droonga-http-server`はシングルプロセスのため、すべてのリクエストを1つだけのホストに送ると`droonga-http-server`がボトルネックとなってしまいます。
+また、`droonga-http-server`と`droonga-engine`がCPU資源を奪い合うことにもなります。
+Droongaクラスタの性能を有効に測定するためには、各ノードのCPU使用率を平滑化する必要があります。
+
+もちろん、実際のプロダクション環境ではこのようなリクエストの分配はロードバランサーによって行われるべきですが、ベンチマークのためだけにロードバランサーを設定するのは煩雑です。
+`--default-hosts`オプションにカンマ区切りで複数のホスト名を指定することで、その代替とすることができます。
+
+また、結果の保存先のパスも変えています。
+
+ベンチマークの実行中、両方のノードのシステムの状態を監視することを忘れないでください。
+もし片方のノードだけに負荷がかかっていてもう片方がアイドル状態なのであれば、両者が1つのクラスタとして働いていないなどのように、何か異常が起こっていると分かります。
+すべてのノードの実際のキャッシュヒット率も忘れずに確認しておきましょう。
+
+#### 3ノード構成でのDroongaのベンチマーク
+
+ベンチマークの前に、最後のノードをクラスタに参加させましょう。
+
+~~~
+(on node0, node1, node2)
+% sudo droonga-engine-catalog-generate \
+    --hosts=node0,node1,node2
+% sudo service droonga-engine restart
+% sudo service droonga-http-server restart
+~~~
+
+これで、`node0`, `node1`, `node2`のすべてのノードが3ノード構成のクラスタとして動作するようになります。
+実際にノードが3つ認識されていることを確認しましょう:
+
+~~~
+(on node3)
+% curl "http://node0:10042/droonga/system/status" | jq .
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node1:10031/droonga": {
+      "live": true
+    },
+    "node2:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+ベンチマークを実行しましょう。
+
+~~~
+(on node3)
+% drnbench-request-response \
+    --step=2 \
+    --start-n-clients=0 \
+    --end-n-clients=20 \
+    --duration=30 \
+    --interval=10 \
+    --request-patterns-file=$PWD/patterns.txt \
+    --default-hosts=node0,node1,node2 \
+    --default-port=10042 \
+    --output-path=$PWD/droonga-result-3nodes.csv
+~~~
+
+また`--default-hosts`と`--output-path`の指定も変えていることに注意して下さい。
+各ノードのシステムの状態の監視と、実際のキャッシュヒット率の確認も忘れてはいけません。
+
+## 結果を分析する
+
+これで、手元に4つの結果が集まりました:
+
+ * `groonga-result.csv`
+ * `droonga-result-1node.csv`
+ * `droonga-result-2nodes.csv`
+ * `droonga-result-3nodes.csv`
+
+[先に述べた通り](#how-to-analyze)、これらを使って傾向を分析することができます。
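+
+なお、グラフ化の方法は何でも構いません。`gnuplot`が利用できる環境であれば、以下のような要領でスループットのグラフを描画できます(CSVの列構成が前述の例の通りであることを前提とした、ごく簡単な例です):
+
+~~~
+% gnuplot <<'EOF'
+set datafile separator ","
+set terminal png
+set output "throughput.png"
+set xlabel "number of clients"
+set ylabel "queries per second"
+plot "groonga-result.csv"        every ::1 using 1:3 with linespoints title "Groonga", \
+     "droonga-result-1node.csv"  every ::1 using 1:3 with linespoints title "Droonga (1 node)", \
+     "droonga-result-2nodes.csv" every ::1 using 1:3 with linespoints title "Droonga (2 nodes)", \
+     "droonga-result-3nodes.csv" every ::1 using 1:3 with linespoints title "Droonga (3 nodes)"
+EOF
+~~~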
+
+例えば、これらの結果は以下のようにグラフ化できます:
+
+![それぞれの場合のレイテンシーを重ねたグラフ](/images/tutorial/benchmark/latency-mixed-1.0.8.png)
+
+このレイテンシーのグラフは以下のように読み取れます:
+
+ * Droongaのレイテンシーの下限はGroongaのそれよりも大きい。
+   Droongaにはオーバーヘッドがある。
+ * 複数ノードのDroongaのレイテンシーはGroongaに比べると緩やかに増大している。
+   Droongaは余計な待ち時間無しでより多くのリクエストを同時に処理できる。
+
+![それぞれの場合のスループットを重ねたグラフ](/images/tutorial/benchmark/throughput-mixed-1.0.8.png)
+
+このスループットのグラフは以下のように読み取れます:
+
+ * GroongaのグラフとDroongaの単一ノード時のグラフは似通っている。
+   GroongaとDroongaの間での性能の劣化はごくわずかである。
+ * Droongaのスループット性能はノード数によって増大する。
+
+(注意: 性能測定の結果は様々な要因によって変動します。
+これはあくまで特定のバージョン、特定の環境での結果の例です。)
+
+## まとめ
+
+このチュートリアルでは、比較対照としての[Groonga][]サーバと、[Droonga][]クラスタを用意しました。
+また、リクエストパターンを用意する手順、システムの性能の測定方法、結果の分析方法なども学びました。
+
+  [Ubuntu]: http://www.ubuntu.com/
+  [CentOS]: https://www.centos.org/
+  [Droonga]: https://droonga.org/
+  [Groonga]: http://groonga.org/
+  [drnbench]: https://github.com/droonga/drnbench/
+  [wikipedia-search]: https://github.com/droonga/wikipedia-search/
+  [command reference]: ../../reference/commands/

  Added: ja/tutorial/1.1.0/dump-restore/index.md (+582 -0) 100644
===================================================================
--- /dev/null
+++ ja/tutorial/1.1.0/dump-restore/index.md    2014-11-30 23:20:40 +0900 (38b9f2c)
@@ -0,0 +1,582 @@
+---
+title: "Droongaチュートリアル: データベースのバックアップと復元"
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/tutorial/1.1.0/dump-restore/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## チュートリアルのゴール
+
+データのバックアップと復元を手動で行う際の手順を学ぶこと。
+
+## 前提条件
+
+* 何らかのデータが格納されている状態の[Droonga][]クラスタがあること。
+  このチュートリアルを始める前に、[「使ってみる」のチュートリアル](../groonga/)を完了しておいて下さい。
+
+このチュートリアルでは、[1つ前のチュートリアル](../groonga/)で準備した2つの既存のDroongaノード:`node0` (`192.168.100.50`) 、 `node1` (`192.168.100.51`) と、作業環境として使うもう1台のコンピュータ `node2` (`192.168.100.52`) があると仮定します。
+あなたの手元にあるDroongaノードがこれとは異なる名前である場合には、以下の説明の中の`node0`、`node1`、`node2`は実際の物に読み替えて下さい。
+
+## Droongaクラスタのデータをバックアップする
+
+### `drndump` のインストール
+
+最初に、作業マシンの`node2`にRubygems経由で `drndump` と名付けられたコマンドラインツールをインストールします:
+
+~~~
+# gem install drndump
+~~~
+
+その後、`drndump` コマンドが正しくインストールできたかどうかを確認します:
+
+~~~
+$ drndump --version
+drndump 1.0.0
+~~~
+
+### Droongaクラスタ内のデータをダンプする
+
+`drndump` コマンドはすべてのスキーマ定義とデータをJSONs形式で取り出します。既存のDroongaクラスタのすべての内容をダンプ出力してみましょう。
+
+例えば、クラスタが `node0` (`192.168.100.50`) と `node1` (`192.168.100.51`) の2つのノードから構成されていて、別のホスト `node2` (`192.168.100.52`) にログインしている場合、コマンドラインは以下の要領です。
+
+~~~
+# drndump --host=node0 \
+           --receiver-host=node2
+{
+  "type": "table_create",
+  "dataset": "Default",
+  "body": {
+    "name": "Location",
+    "flags": "TABLE_PAT_KEY",
+    "key_type": "WGS84GeoPoint"
+  }
+}
+...
+{
+  "dataset": "Default",
+  "body": {
+    "table": "Store",
+    "key": "store9",
+    "values": {
+      "location": "146702531x-266363233",
+      "name": "Macy's 6th Floor - Herald Square - New York NY  (W)"
+    }
+  },
+  "type": "add"
+}
+{
+  "type": "column_create",
+  "dataset": "Default",
+  "body": {
+    "table": "Location",
+    "name": "store",
+    "type": "Store",
+    "flags": "COLUMN_INDEX",
+    "source": "location"
+  }
+}
+{
+  "type": "column_create",
+  "dataset": "Default",
+  "body": {
+    "table": "Term",
+    "name": "store_name",
+    "type": "Store",
+    "flags": "COLUMN_INDEX|WITH_POSITION",
+    "source": "name"
+  }
+}
+~~~
+
+以下の点に注意して下さい:
+
+ * `--host` オプションには、クラスタ内のいずれかのノードの正しいホスト名またはIPアドレスを指定します。
+ * `--receiver-host` オプションには、今操作しているコンピュータ自身の正しいホスト名またはIPアドレスを指定します。
+   この情報は、Droongaクラスタがメッセージを送り返すために使われます。
+ * コマンドの実行結果は、ダンプ出力元と同じ内容のデータセットを構築するのに必要なすべての情報を含んでいます。
+
+実行結果は標準出力に出力されます。
+結果をJSONs形式のファイルに保存する場合は、リダイレクトを使って以下のようにして下さい:
+
+~~~
+$ drndump --host=node0 \
+          --receiver-host=node2 \
+    > dump.jsons
+~~~
+
+
+## Droongaクラスタのデータを復元する
+
+### `droonga-client`のインストール
+
+`drndump` コマンドの実行結果は、Droonga用のメッセージの一覧です。
+
+Droongaクラスタにそれらのメッセージを送信するには、`droonga-send` コマンドを使います。
+このコマンドを含んでいるGemパッケージ `droonga-client` を、作業マシンである`node2`にインストールして下さい:
+
+~~~
+# gem install droonga-client
+~~~
+
+`droonga-send` コマンドが正しくインストールされた事を確認しましょう:
+
+~~~
+$ droonga-send --version
+droonga-send 0.2.0
+~~~
+
+### 空のDroongaクラスタを用意する
+
+2つのノード `node0` (`192.168.100.50`) と `node1` (`192.168.100.51`) からなる空のクラスタがあり、今 `node2` (`192.168.100.52`) にログインして操作を行っていて、ダンプファイルが `dump.jsons` という名前で手元にあると仮定します。
+
+もし順番にこのチュートリアルを読み進めているのであれば、クラスタとダンプファイルが既に手元にあるはずです。以下の操作でクラスタを空にしましょう:
+
+~~~
+$ endpoint="http://node0:10041"
+$ curl "$endpoint/d/table_remove?name=Location" | jq "."
+[
+  [
+    0,
+    1406610703.2229023,
+    0.0010793209075927734
+  ],
+  true
+]
+$ curl "$endpoint/d/table_remove?name=Store" | jq "."
+[
+  [
+    0,
+    1406610708.2757723,
+    0.006396293640136719
+  ],
+  true
+]
+$ curl "$endpoint/d/table_remove?name=Term" | jq "."
+[
+  [
+    0,
+    1406610712.379644,
+    6.723403930664062e-05
+  ],
+  true
+]
+~~~
+
+これでクラスタは空になりました。
+確かめてみましょう。
+以下のように、`select`と`table_list`コマンドは空の結果を返します:
+
+~~~
+$ curl "$endpoint/d/table_list" | jq "."
+[
+  [
+    0,
+    1406610804.1535122,
+    0.0002875328063964844
+  ],
+  [
+    [
+      [
+        "id",
+        "UInt32"
+      ],
+      [
+        "name",
+        "ShortText"
+      ],
+      [
+        "path",
+        "ShortText"
+      ],
+      [
+        "flags",
+        "ShortText"
+      ],
+      [
+        "domain",
+        "ShortText"
+      ],
+      [
+        "range",
+        "ShortText"
+      ],
+      [
+        "default_tokenizer",
+        "ShortText"
+      ],
+      [
+        "normalizer",
+        "ShortText"
+      ]
+    ]
+  ]
+]
+$ curl -X DELETE "$endpoint/cache" | jq "."
+true
+$ curl "$endpoint/d/select?table=Store&output_columns=name&limit=10" | jq "."
+[
+  [
+    0,
+    1401363465.610241,
+    0
+  ],
+  [
+    [
+      [
+        null
+      ],
+      []
+    ]
+  ]
+]
+~~~
+
+`select`コマンドにリクエストを送る前に、まずキャッシュを削除しておくことに注意が必要です。
+これを怠ると、古い情報に基づいて、キャッシュされた結果が意図せず返されてしまいます。
+
+既定の設定では、レスポンスキャッシュは直近の100リクエストに対して保存され、保持期間は1分間です。
+上記のように、`/cache`のパスの位置にHTTPの`DELETE`のリクエストを送信すると、手動でレスポンスキャッシュを削除できます。
+
+### ダンプ結果から空のDroongaクラスタへデータを復元する
+
+`drndump` の実行結果はダンプ出力元と同じ内容のデータセットを作るために必要な情報をすべて含んでいます。そのため、クラスタが壊れた場合でも、ダンプファイルからクラスタを再構築する事ができます。
+やり方は単純で、単にダンプファイルを `droonga-send` コマンドを使って空のクラスタに流し込むだけです。
+
+ダンプファイルからクラスタの内容を復元するには、以下のようなコマンドを実行します:
+
+~~~
+$ droonga-send --server=node0  \
+                    dump.jsons
+~~~
+
+注意:
+
+ * `--server` オプションには、クラスタ内のいずれかのノードの正しいホスト名またはIPアドレスを指定します。
+
+これで、データが完全に復元されました。確かめてみましょう:
+
+~~~
+$ curl -X DELETE "$endpoint/cache" | jq "."
+true
+$ curl "$endpoint/d/select?table=Store&output_columns=name&limit=10" | jq "."
+[
+  [
+    0,
+    1401363556.0294158,
+    7.62939453125e-05
+  ],
+  [
+    [
+      [
+        40
+      ],
+      [
+        [
+          "name",
+          "ShortText"
+        ]
+      ],
+      [
+        "1st Avenue & 75th St. - New York NY  (W)"
+      ],
+      [
+        "76th & Second - New York NY  (W)"
+      ],
+      [
+        "Herald Square- Macy's - New York NY"
+      ],
+      [
+        "Macy's 5th Floor - Herald Square - New York NY  (W)"
+      ],
+      [
+        "80th & York - New York NY  (W)"
+      ],
+      [
+        "Columbus @ 67th - New York NY  (W)"
+      ],
+      [
+        "45th & Broadway - New York NY  (W)"
+      ],
+      [
+        "Marriott Marquis - Lobby - New York NY"
+      ],
+      [
+        "Second @ 81st - New York NY  (W)"
+      ],
+      [
+        "52nd & Seventh - New York NY  (W)"
+      ]
+    ]
+  ]
+]
+~~~
+
+## 既存のクラスタを別の空のクラスタに直接複製する
+
+複数のDroongaクラスタが存在する場合、片方のクラスタの内容をもう片方のクラスタに複製することができます。
+`droonga-engine` パッケージは `droonga-engine-absorb-data` というユーティリティコマンドを含んでおり、これを使うと、既存のクラスタから別のクラスタへ直接データをコピーする事ができます。ローカルにダンプファイルを保存する必要がない場合には、この方法がおすすめです。
+
+### 複数のDroongaクラスタを用意する
+
+ノード `node0` (`192.168.100.50`) を含む複製元クラスタと、ノード `node1` (`192.168.100.51`) を含む複製先クラスタの2つのクラスタがあると仮定します。
+
+もし順番にこのチュートリアルを読み進めているのであれば、2つのノードを含むクラスタが手元にあるはずです。`droonga-engine-catalog-modify` を使って2つのクラスタを作り、1つを空にしましょう。手順は以下の通りです:
+
+~~~
+(on node0)
+# droonga-engine-catalog-modify --replica-hosts=node0
+~~~
+
+~~~
+(on node1)
+# droonga-engine-catalog-modify --replica-hosts=node1
+$ endpoint="http://node1:10041"
+$ curl "$endpoint/d/table_remove?name=Location"
+$ curl "$endpoint/d/table_remove?name=Store"
+$ curl "$endpoint/d/table_remove?name=Term"
+~~~
+
+これで、ノード `node0` を含む複製元クラスタと、ノード `node1` を含む複製先の空のクラスタの、2つのクラスタができました。確かめてみましょう:
+
+
+~~~
+$ curl "http://node0:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    }
+  }
+}
+$ curl -X DELETE "http://node0:10041/cache" | jq "."
+true
+$ curl "http://node0:10041/d/select?table=Store&output_columns=name&limit=10" | jq "."
+[
+  [
+    0,
+    1401363556.0294158,
+    7.62939453125e-05
+  ],
+  [
+    [
+      [
+        40
+      ],
+      [
+        [
+          "name",
+          "ShortText"
+        ]
+      ],
+      [
+        "1st Avenue & 75th St. - New York NY  (W)"
+      ],
+      [
+        "76th & Second - New York NY  (W)"
+      ],
+      [
+        "Herald Square- Macy's - New York NY"
+      ],
+      [
+        "Macy's 5th Floor - Herald Square - New York NY  (W)"
+      ],
+      [
+        "80th & York - New York NY  (W)"
+      ],
+      [
+        "Columbus @ 67th - New York NY  (W)"
+      ],
+      [
+        "45th & Broadway - New York NY  (W)"
+      ],
+      [
+        "Marriott Marquis - Lobby - New York NY"
+      ],
+      [
+        "Second @ 81st - New York NY  (W)"
+      ],
+      [
+        "52nd & Seventh - New York NY  (W)"
+      ]
+    ]
+  ]
+]
+$ curl "http://node1:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node1:10031/droonga": {
+      "live": true
+    }
+  }
+}
+$ curl -X DELETE "http://node1:10041/cache" | jq "."
+true
+$ curl "http://node1:10041/d/select?table=Store&output_columns=name&limit=10" | jq "."
+[
+  [
+    0,
+    1401363465.610241,
+    0
+  ],
+  [
+    [
+      [
+        null
+      ],
+      []
+    ]
+  ]
+]
+~~~
+
+`droonga-http-server`は同じコンピュータ上の`droonga-engine`に関連付けられていることに注意してください。
+上記の手順でクラスタを2つに分割した後は、`node0`の`droonga-http-server`は`node0`の`droonga-engine`とだけ通信し、`node1`の`droonga-http-server`は`node1`の`droonga-engine`とだけ通信します。
+詳しくは次のチュートリアルも参照して下さい。
+
+
+### 2つのDroongaクラスタの間でデータを複製する
+
+2つのクラスタの間でデータをコピーするには、いずれかのノード上で以下のように `droonga-engine-absorb-data` コマンドを実行します:
+
+~~~
+(on node1)
+$ droonga-engine-absorb-data --source-host=node0 \
+                             --destination-host=node1 \
+                             --receiver-host=node1
+Start to absorb data from node0
+                       to node1
+                      via node1 (this host)
+  dataset = Default
+  port    = 10031
+  tag     = droonga
+
+Absorbing...
+...
+Done.
+~~~
+
+このコマンドは、以下のようにして別のノード上で実行することもできます:
+
+~~~
+(on node2)
+$ droonga-engine-absorb-data --source-host=node0 \
+                             --destination-host=node1 \
+                             --receiver-host=node2
+Start to absorb data from node0
+                       to node1
+                      via node2 (this host)
+...
+~~~
+
+この時、コマンドを実行するノードのホスト名かIPアドレスを`--receiver-host`オプションで指定する必要があることに注意してください。
+
+以上の操作で、2つのクラスタの内容が完全に同期されました。確かめてみましょう:
+
+~~~
+$ curl -X DELETE "http://node1:10041/cache" | jq "."
+true
+$ curl "http://node1:10041/d/select?table=Store&output_columns=name&limit=10" | jq "."
+[
+  [
+    0,
+    1401363556.0294158,
+    7.62939453125e-05
+  ],
+  [
+    [
+      [
+        40
+      ],
+      [
+        [
+          "name",
+          "ShortText"
+        ]
+      ],
+      [
+        "1st Avenue & 75th St. - New York NY  (W)"
+      ],
+      [
+        "76th & Second - New York NY  (W)"
+      ],
+      [
+        "Herald Square- Macy's - New York NY"
+      ],
+      [
+        "Macy's 5th Floor - Herald Square - New York NY  (W)"
+      ],
+      [
+        "80th & York - New York NY  (W)"
+      ],
+      [
+        "Columbus @ 67th - New York NY  (W)"
+      ],
+      [
+        "45th & Broadway - New York NY  (W)"
+      ],
+      [
+        "Marriott Marquis - Lobby - New York NY"
+      ],
+      [
+        "Second @ 81st - New York NY  (W)"
+      ],
+      [
+        "52nd & Seventh - New York NY  (W)"
+      ]
+    ]
+  ]
+]
+~~~
+
+### 2つのDroongaクラスタを結合する
+
+これらの2つのクラスタを結合するために、以下のコマンド列を実行しましょう:
+
+~~~
+(on node0)
+# droonga-engine-catalog-modify --add-replica-hosts=node1
+~~~
+
+~~~
+(on node1)
+# droonga-engine-catalog-modify --add-replica-hosts=node0
+~~~
+
+これで、1つだけクラスタがある状態になりました。最初の状態に戻ったという事になります。
+
+~~~
+$ curl "http://node0:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node1:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+## まとめ
+
+このチュートリアルでは、[Droonga][]クラスタのバックアップとデータの復元の方法を実践しました。
+また、既存のDroongaクラスタの内容を別の空のクラスタへ複製する方法も実践しました。
+
+続いて、[既存のDroongaクラスタに新しいreplicaを追加する手順](../add-replica/)を学びましょう。
+
+  [Ubuntu]: http://www.ubuntu.com/
+  [Droonga]: https://droonga.org/
+  [Groonga]: http://groonga.org/
+  [command reference]: ../../reference/commands/

  Added: ja/tutorial/1.1.0/groonga/index.md (+980 -0) 100644
===================================================================
--- /dev/null
+++ ja/tutorial/1.1.0/groonga/index.md    2014-11-30 23:20:40 +0900 (c9706be)
@@ -0,0 +1,980 @@
+---
+title: "Droongaチュートリアル: 使ってみる/Groongaからの移行手順"
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/tutorial/1.1.0/groonga/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## チュートリアルのゴール
+
+Droongaクラスタを自分で構築して、[Groonga][groonga]互換のサーバとして利用できるようにするための手順を学ぶこと。
+
+## 前提条件
+
+* [Ubuntu][]または[CentOS][]サーバのセットアップと操作について、基本的な知識と経験があること。
+* [Groonga][groonga]のHTTP経由での利用について、基本的な知識と経験があること。
+
+## Droongaとは何か?
+
+Droongaは分散アーキテクチャに基づくデータ処理エンジンで、「distributed-Groonga」がその名の由来です。
+名前が示す通り、Droongaはいくつかの点での改善(具体的には、レプリケーションとシャーディング)を含んだGroonga互換のサーバとして動作することができます。
+
+アーキテクチャ、設計、APIなどの点で、DroongaはGroongaと大きく異なっています。
+しかしながら、Droongaを単にGroonga互換のサーバとして使う限りにおいては、Droongaのアーキテクチャ全体を理解する必要はありません。
+
+例として、[ニューヨークにあるスターバックスの店舗](http://geocommons.com/overlays/430038)を検索できるデータベースシステムを作成することにします。
+
+## Droongaクラスタをセットアップする
+
+Droongaベースのデータベースシステムは、*Droongaクラスタ*と呼ばれます。
+この節では、Droongaクラスタを0から構築する方法を解説します。
+
+### Droongaノード用のコンピュータを用意する
+
+Droongaクラスタは、*Droongaノード*と呼ばれる1つ以上のコンピュータによって構成されます。
+まず、Droongaノードにするためのコンピュータを用意しましょう。
+
+このチュートリアルは、既存のコンピュータを使ってDroongaクラスタを構築する手順について解説しています。
+以下の説明は基本的には、[DigitalOcean](https://www.digitalocean.com/)上のサーバで`Ubuntu 14.04 x64`または`CentOS 7 x64`の仮想マシンが正しく準備されており、コンソールが利用できる状態になっている、という前提に基づいています。
+
+単にDroongaを試したいだけの場合は、[自分のコンピュータ上に複数台の仮想マシンを用意する手順の解説](../virtual-machines-for-experiments/)も参照してみて下さい。
+
+注意:
+
+ * Droongaの依存パッケージをインストールする前に、仮想マシンのインスタンスが少なくとも2GB以上のメモリを備えていることを確認して下さい。
+   メモリが足りないと、パッケージのインストール中にネイティブ拡張のビルドに失敗する場合があります。
+ * `hostname -f`で報告されるホスト名、または`hostname -i`で報告されるIPアドレスが、クラスタ内の他のコンピュータからアクセス可能なものであることを確認して下さい。
+ * `curl`コマンドと`jq`コマンドがインストールされていることを確認して下さい。
+   `curl`はインストールスクリプトをダウンロードするために必要です。
+   `jq`はインストールのためには必要ではありませんが、Droongaが返却するJSON形式のレスポンスを読むのに役立つでしょう。
+   (これらの確認に使えるコマンドの例を、このリストの直後に示します。)
+
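+上記の注意点を確認するためのコマンド例です。出力はこのチュートリアルの `node0` を想定した一例で、実際の値は環境によって異なります:
+
+~~~
+$ hostname -f
+node0
+$ hostname -i
+192.168.100.50
+$ curl --version
+$ jq --version
+~~~
+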
+有効なレプリケーションを実現するためには2台以上のコンピュータを用意する必要があります。
+ですので、このチュートリアルでは以下のような2台のコンピュータがあると仮定して説明を進めます:
+
+ * IPアドレスが`192.168.100.50`で、ホスト名が`node0`であるコンピュータ。
+ * IPアドレスが`192.168.100.51`で、ホスト名が`node1`であるコンピュータ。
+
+## コンピュータをDroongaノードとしてセットアップする
+
+Groongaはバイナリのパッケージを提供しているため、環境によっては簡単にインストールできます。
+([Groongaのインストール手順](http://groonga.org/docs/install.html)を参照)
+
+それに対し、コンピュータをDroongaノードとしてセットアップする手順は以下の通りです:
+
+ 1. `droonga-engine`をインストールする。
+ 2. `droonga-http-server`をインストールする。
+ 3. そのノードを他のノードと協調して動作するように設定する。
+
+上記の手順を各コンピュータに対して実施する必要があることに注意して下さい。
+しかしながら、各手順は非常に簡単です。
+
+それでは、`node0` (`192.168.100.50`)にログインしてDroongaの構成コンポーネントをインストールしましょう。
+
+まず、`droonga-engine`をインストールします。
+これはDroongaシステムの主要な機能を提供する、核となるコンポーネントです。
+インストールスクリプトをダウンロードし、root権限で`bash`で実行して下さい:
+
+~~~
+# curl https://raw.githubusercontent.com/droonga/droonga-engine/master/install.sh | \
+    bash
+...
+Installing droonga-engine from RubyGems...
+...
+Preparing the user...
+...
+Setting up the configuration directory...
+This node is configured with a hostname XXXXXXXX.
+
+Registering droonga-engine as a service...
+...
+Successfully installed droonga-engine.
+~~~
+
+そのノード自身の名前(コンピュータのホスト名から推測されたもの)がメッセージの中に出力されていることに注意して下さい。
+*この名前は様々な場面で使われます*ので、*各ノードの名前が何であるかを忘れないようにして下さい*。
+
+次に、`droonga-http-server`をインストールします。
+これはHTTPのリクエストをDroongaネイティブのリクエストに変換するために必要な、フロントエンドとなるコンポーネントです。
+インストールスクリプトをダウンロードし、root権限で`bash`で実行して下さい:
+
+~~~
+# curl https://raw.githubusercontent.com/droonga/droonga-http-server/master/install.sh | \
+    bash
+...
+Installing droonga-http-server from npmjs.org...
+...
+Preparing the user...
+...
+Setting up the configuration directory...
+The droonga-engine service is detected on this node.
+The droonga-http-server is configured to be connected
+to this node (XXXXXXXX).
+This node is configured with a hostname XXXXXXXX.
+
+Registering droonga-http-server as a service...
+...
+Successfully installed droonga-http-server.
+~~~
+
+ここまでの操作が終わったら、同じ操作をもう1台のコンピュータ `node1` (`192.168.100.51`) に対しても行います。
+これで、無事に2台のコンピュータをDroongaノードとして動作させるための準備が整いました。
+
+### コンピュータが他のコンピュータからアクセスできるホスト名を持っていない場合…… {#accessible-host-name}
+
+各Droongaノードは、他のノードと通信するために、そのノード自身の*アクセス可能な名前*を把握している必要があります。
+
+インストールスクリプトはそのノードのアクセス可能なホスト名を自動的に推測します。
+どのような値がそのノード自身のホスト名として認識されたかは、以下の手順で確認できます:
+
+~~~
+# cat ~droonga-engine/droonga/droonga-engine.yaml | grep host
+host: XXXXXXXX
+~~~
+
+しかしながら、そのコンピュータが適切に設定されていないと、この自動認識に失敗することがあります。
+例えば、そのノードのホスト名が`node0`であると設定されているにも関わらず、他のノードが`node0`というホスト名から実際のIPアドレスを名前解決できないと、そのノードは他のノードから送られてくるメッセージを何も受信することができません。
+
+そのような場合、以下のようにして、そのノード自身のIPアドレスを使ってノードを再設定する必要があります:
+
+~~~
+(on node0=192.168.100.50)
+# host=192.168.100.50
+# droonga-engine-configure --quiet --reset-config --reset-catalog \
+                           --host=$host
+# droonga-http-server-configure --quiet --reset-config \
+                                --droonga-engine-host-name=$host \
+                                --receive-host-name=$host
+
+(on node1=192.168.100.51)
+# host=192.168.100.51
+...
+~~~
+
+この操作により、コンピュータ `node0` は `192.168.100.50` というホスト名のDroongaノード、コンピュータ `node1` は `192.168.100.51` というホスト名のDroongaノードとして設定されます。
+前述した通り、*ここで設定された名前は様々な場面で使われます*ので、*各ノードの名前が何であるかを忘れないようにして下さい*。
+
+このチュートリアルでは、各コンピュータはお互いのホスト名`node0`と`node1`を正しく名前解決できるものと仮定します。
+あなたの環境ではホスト名の解決ができないという場合には、以下の説明の中の`node0`と`node1`は、実際のIPアドレス(例えば`192.168.100.50`と`192.168.100.51`)に読み替えて下さい。
+
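+なお、これはあくまで一例ですが、DNSに頼らずに名前解決を行いたい場合は、各コンピュータの `/etc/hosts` に以下のようなエントリを追加して、ホスト名 `node0`・`node1` を解決できるようにする方法もあります:
+
+~~~
+192.168.100.50    node0
+192.168.100.51    node1
+~~~
+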
+ちなみに、インストールスクリプトに対しても、以下のように、環境変数を使って任意の値をホスト名として指定することができます:
+
+~~~
+(on node0=192.168.100.50)
+# host=192.168.100.50
+# curl https://raw.githubusercontent.com/droonga/droonga-engine/master/install.sh | \
+    HOST=$host bash
+# curl https://raw.githubusercontent.com/droonga/droonga-http-server/master/install.sh | \
+    ENGINE_HOST=$host HOST=$host bash
+
+(on node1=192.168.100.51)
+# host=192.168.100.51
+...
+~~~
+
+この方法は、使おうとしているコンピュータがお互いのホスト名を名前解決できないことがあらかじめ分かっている場合に便利でしょう。
+
+### 各ノードをクラスタとして動作するように設定する
+
+現時点で、これらのノードはまだ個別に動作する状態になっています。
+それでは、これらを1つのクラスタとして動作するように設定しましょう。
+
+以下のようなコマンドを各ノードで実行して下さい:
+
+~~~
+# droonga-engine-catalog-generate --hosts=node0,node1
+~~~
+
+当然ながら、`--hosts`オプションには各ノードの正しいホスト名を指定する必要があります。
+もしこれらのノードがIPアドレスをホスト名として設定されている場合には、コマンド列は以下のようになります:
+
+~~~
+# droonga-engine-catalog-generate --hosts=192.168.100.50,192.168.100.51
+~~~
+
+これで、Droongaクラスタの準備が完了しました。
+2つのノードは、1つのDroongaクラスタとして協調して動作できる状態になっています。
+
+引き続き、[クラスタの使い方の説明](#use)に進みましょう。
+
+
+## DroongaクラスタをHTTP経由で使用する {#use}
+
+### 各Droongaノードの上でのサービスの開始と停止
+
+GroongaをHTTPサーバとして使う場合は、以下のように `-d` オプションを指定するだけでサーバを起動できます:
+
+~~~
+# groonga -p 10041 -d --protocol http /tmp/databases/db
+~~~
+
+一方、DroongaクラスタをHTTP経由で使うためには、各Droongaノードにおいて複数のサーバ・デーモンを起動する必要があります。
+
+Droongaノードをインストールスクリプトを使ってセットアップした場合、デーモンは既に、`service`コマンドによって管理されるシステムのサービスとして設定されています。
+サービスを起動するには、以下のようなコマンドを各Droongaノードで実行して下さい:
+
+~~~
+# service droonga-engine start
+# service droonga-http-server start
+~~~
+
+これらのコマンドにより、各サービスが動作し始めます。
+これで、2つのノードは1つのクラスタを形成し、お互いの状態を監視し合う状態になりました。
+もしノードが1つ停止しても、他のノードが生存していれば、それらの生存ノードだけでDroongaクラスタは動作し続けます。
+ですので、機能停止したノードを、利用者に意識させることなく復旧してクラスタに復帰させることができます。
+
+クラスタが動作している事を、`system.status` コマンドを使って確認してみましょう。
+コマンドはHTTP経由で実行できます:
+
+~~~
+$ curl "http://node0:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node1:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+この結果は、2つのノードが正常に動作している事を示しています。
+Droongaはクラスタで動作するので、他のエンドポイントも同じ結果を返します。
+
+~~~
+$ curl "http://node1:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node1:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+`droonga-http-server`はクラスタ内のすべての`droonga-engine`に接続し、ロードバランサーのように、リクエストをそれらへ分配します。
+また、もしいくつかの`droonga-engine`が停止しても、`droonga-http-server`はそれらの死んだノードを自動的に回避するため、クラスタは正常に動作し続けます。
+
+サービスを停止するには、以下のコマンドを各Droongaノード上で実行します:
+
+~~~
+# service droonga-engine stop
+# service droonga-http-server stop
+~~~
+
+確認が終わったら、再度サービスを起動しておきましょう:
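+
+~~~
+# service droonga-engine start
+# service droonga-http-server start
+~~~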
+
+### テーブル、カラム、インデックスの作成
+
+以上の手順で、Groonga HTTPサーバ互換のHTTPサーバとして動作するDroongaクラスタができました。
+
+リクエストの送信方法はGroongaサーバの場合と全く同じです。
+新しいテーブル `Store` を作るには、`table_create` コマンドにあたるGETリクエストを送信して下さい:
+
+~~~
+$ endpoint="http://node0:10041"
+$ curl "$endpoint/d/table_create?name=Store&flags=TABLE_PAT_KEY&key_type=ShortText" | jq "."
+[
+  [
+    0,
+    1401358896.360356,
+    0.0035653114318847656
+  ],
+  true
+]
+~~~
+
+リクエストの送信先として、`droonga-http-server`が動作しているDroongaノードのどれか1つを指定する必要がある事に注意して下さい。
+言い換えると、接続先(エンドポイント)としてはクラスタ中のどのノードでも好きな物を使う事ができます。
+すべてのリクエストは、クラスタ中の適切なノードに配送されます。
+
+さて、テーブルを正しく作成できました。
+`table_list` コマンドを使って、作成されたテーブルの情報を見てみましょう:
+
+~~~
+$ curl "$endpoint/d/table_list" | jq "."
+[
+  [
+    0,
+    1401358908.9126804,
+    0.001600027084350586
+  ],
+  [
+    [
+      [
+        "id",
+        "UInt32"
+      ],
+      [
+        "name",
+        "ShortText"
+      ],
+      [
+        "path",
+        "ShortText"
+      ],
+      [
+        "flags",
+        "ShortText"
+      ],
+      [
+        "domain",
+        "ShortText"
+      ],
+      [
+        "range",
+        "ShortText"
+      ],
+      [
+        "default_tokenizer",
+        "ShortText"
+      ],
+      [
+        "normalizer",
+        "ShortText"
+      ]
+    ],
+    [
+      256,
+      "Store",
+      "/home/vagrant/droonga/000/db.0000100",
+      "TABLE_PAT_KEY|PERSISTENT",
+      "ShortText",
+      null,
+      null,
+      null
+    ]
+  ]
+]
+~~~
+
+Droongaはクラスタで動作するので、他のエンドポイントも同じ結果を返します。
+
+~~~
+$ curl "http://node1:10041/d/table_list" | jq "."
+[
+  [
+    0,
+    1401358908.9126804,
+    0.001600027084350586
+  ],
+  [
+    [
+      [
+        "id",
+        "UInt32"
+      ],
+      [
+        "name",
+        "ShortText"
+      ],
+      [
+        "path",
+        "ShortText"
+      ],
+      [
+        "flags",
+        "ShortText"
+      ],
+      [
+        "domain",
+        "ShortText"
+      ],
+      [
+        "range",
+        "ShortText"
+      ],
+      [
+        "default_tokenizer",
+        "ShortText"
+      ],
+      [
+        "normalizer",
+        "ShortText"
+      ]
+    ],
+    [
+      256,
+      "Store",
+      "/home/vagrant/droonga/000/db.0000100",
+      "TABLE_PAT_KEY|PERSISTENT",
+      "ShortText",
+      null,
+      null,
+      null
+    ]
+  ]
+]
+~~~
+
+次は、`column_create` コマンドを使って `Store` テーブルに `name` と `location` という新しいカラムを作ります:
+
+~~~
+$ curl "$endpoint/d/column_create?table=Store&name=name&flags=COLUMN_SCALAR&type=ShortText" | jq "."
+[
+  [
+    0,
+    1401358348.6541538,
+    0.0004096031188964844
+  ],
+  true
+]
+$ curl "$endpoint/d/column_create?table=Store&name=location&flags=COLUMN_SCALAR&type=WGS84GeoPoint" | jq "."
+[
+  [
+    0,
+    1401358359.084659,
+    0.002511262893676758
+  ],
+  true
+]
+~~~
+
+インデックスも作成しましょう。
+
+~~~
+$ curl "$endpoint/d/table_create?name=Term&flags=TABLE_PAT_KEY&key_type=ShortText&default_tokenizer=TokenBigram&normalizer=NormalizerAuto" | jq "."
+[
+  [
+    0,
+    1401358475.7229664,
+    0.002419710159301758
+  ],
+  true
+]
+$ curl "$endpoint/d/column_create?table=Term&name=store_name&flags=COLUMN_INDEX|WITH_POSITION&type=Store&source=name" | jq "."
+[
+  [
+    0,
+    1401358494.1656318,
+    0.006799221038818359
+  ],
+  true
+]
+$ curl "$endpoint/d/table_create?name=Location&flags=TABLE_PAT_KEY&key_type=WGS84GeoPoint" | jq "."
+[
+  [
+    0,
+    1401358505.708896,
+    0.0016951560974121094
+  ],
+  true
+]
+$ curl "$endpoint/d/column_create?table=Location&name=store&flags=COLUMN_INDEX&type=Store&source=location" | jq "."
+[
+  [
+    0,
+    1401358519.6187897,
+    0.024788379669189453
+  ],
+  true
+]
+~~~
+
+結果を確認してみましょう:
+
+~~~
+$ curl "$endpoint/d/table_list" | jq "."
+[
+  [
+    0,
+    1416390011.7194495,
+    0.0015704631805419922
+  ],
+  [
+    [
+      [
+        "id",
+        "UInt32"
+      ],
+      [
+        "name",
+        "ShortText"
+      ],
+      [
+        "path",
+        "ShortText"
+      ],
+      [
+        "flags",
+        "ShortText"
+      ],
+      [
+        "domain",
+        "ShortText"
+      ],
+      [
+        "range",
+        "ShortText"
+      ],
+      [
+        "default_tokenizer",
+        "ShortText"
+      ],
+      [
+        "normalizer",
+        "ShortText"
+      ]
+    ],
+    [
+      261,
+      "Location",
+      "/home/droonga-engine/droonga/databases/000/db.0000105",
+      "TABLE_PAT_KEY|PERSISTENT",
+      "WGS84GeoPoint",
+      null,
+      null,
+      null
+    ],
+    [
+      256,
+      "Store",
+      "/home/droonga-engine/droonga/databases/000/db.0000100",
+      "TABLE_PAT_KEY|PERSISTENT",
+      "ShortText",
+      null,
+      null,
+      null
+    ],
+    [
+      259,
+      "Term",
+      "/home/droonga-engine/droonga/databases/000/db.0000103",
+      "TABLE_PAT_KEY|PERSISTENT",
+      "ShortText",
+      null,
+      "TokenBigram",
+      "NormalizerAuto"
+    ]
+  ]
+]
+$ curl "$endpoint/d/column_list?table=Store" | jq "."
+[
+  [
+    0,
+    1416390069.515929,
+    0.001077413558959961
+  ],
+  [
+    [
+      [
+        "id",
+        "UInt32"
+      ],
+      [
+        "name",
+        "ShortText"
+      ],
+      [
+        "path",
+        "ShortText"
+      ],
+      [
+        "type",
+        "ShortText"
+      ],
+      [
+        "flags",
+        "ShortText"
+      ],
+      [
+        "domain",
+        "ShortText"
+      ],
+      [
+        "range",
+        "ShortText"
+      ],
+      [
+        "source",
+        "ShortText"
+      ]
+    ],
+    [
+      256,
+      "_key",
+      "",
+      "",
+      "COLUMN_SCALAR",
+      "Store",
+      "ShortText",
+      []
+    ],
+    [
+      258,
+      "location",
+      "/home/droonga-engine/droonga/databases/000/db.0000102",
+      "fix",
+      "COLUMN_SCALAR",
+      "Store",
+      "WGS84GeoPoint",
+      []
+    ],
+    [
+      257,
+      "name",
+      "/home/droonga-engine/droonga/databases/000/db.0000101",
+      "var",
+      "COLUMN_SCALAR",
+      "Store",
+      "ShortText",
+      []
+    ]
+  ]
+]
+$ curl "$endpoint/d/column_list?table=Term" | jq "."
+[
+  [
+    0,
+    1416390110.143951,
+    0.0013172626495361328
+  ],
+  [
+    [
+      [
+        "id",
+        "UInt32"
+      ],
+      [
+        "name",
+        "ShortText"
+      ],
+      [
+        "path",
+        "ShortText"
+      ],
+      [
+        "type",
+        "ShortText"
+      ],
+      [
+        "flags",
+        "ShortText"
+      ],
+      [
+        "domain",
+        "ShortText"
+      ],
+      [
+        "range",
+        "ShortText"
+      ],
+      [
+        "source",
+        "ShortText"
+      ]
+    ],
+    [
+      259,
+      "_key",
+      "",
+      "",
+      "COLUMN_SCALAR",
+      "Term",
+      "ShortText",
+      []
+    ],
+    [
+      260,
+      "store_name",
+      "/home/droonga-engine/droonga/databases/000/db.0000104",
+      "index",
+      "COLUMN_INDEX|WITH_POSITION",
+      "Term",
+      "Store",
+      [
+        "name"
+      ]
+    ]
+  ]
+]
+$ curl "$endpoint/d/column_list?table=Location" | jq "."
+[
+  [
+    0,
+    1416390163.0140722,
+    0.0009713172912597656
+  ],
+  [
+    [
+      [
+        "id",
+        "UInt32"
+      ],
+      [
+        "name",
+        "ShortText"
+      ],
+      [
+        "path",
+        "ShortText"
+      ],
+      [
+        "type",
+        "ShortText"
+      ],
+      [
+        "flags",
+        "ShortText"
+      ],
+      [
+        "domain",
+        "ShortText"
+      ],
+      [
+        "range",
+        "ShortText"
+      ],
+      [
+        "source",
+        "ShortText"
+      ]
+    ],
+    [
+      261,
+      "_key",
+      "",
+      "",
+      "COLUMN_SCALAR",
+      "Location",
+      "WGS84GeoPoint",
+      []
+    ],
+    [
+      262,
+      "store",
+      "/home/droonga-engine/droonga/databases/000/db.0000106",
+      "index",
+      "COLUMN_INDEX",
+      "Location",
+      "Store",
+      [
+        "location"
+      ]
+    ]
+  ]
+]
+~~~
+
+### テーブルへのデータの読み込み
+
+それでは、`Store` テーブルにデータを読み込みましょう。
+まず、データを `stores.json` という名前のJSON形式のファイルとして用意します。
+
+stores.json:
+
+~~~
+[
+["_key","name","location"],
+["store0","1st Avenue & 75th St. - New York NY  (W)","40.770262,-73.954798"],
+["store1","76th & Second - New York NY  (W)","40.771056,-73.956757"],
+["store2","2nd Ave. & 9th Street - New York NY","40.729445,-73.987471"],
+["store3","15th & Third - New York NY  (W)","40.733946,-73.9867"],
+["store4","41st and Broadway - New York NY  (W)","40.755111,-73.986225"],
+["store5","84th & Third Ave - New York NY  (W)","40.777485,-73.954979"],
+["store6","150 E. 42nd Street - New York NY  (W)","40.750784,-73.975582"],
+["store7","West 43rd and Broadway - New York NY  (W)","40.756197,-73.985624"],
+["store8","Macy's 35th Street Balcony - New York NY","40.750703,-73.989787"],
+["store9","Macy's 6th Floor - Herald Square - New York NY  (W)","40.750703,-73.989787"],
+["store10","Herald Square- Macy's - New York NY","40.750703,-73.989787"],
+["store11","Macy's 5th Floor - Herald Square - New York NY  (W)","40.750703,-73.989787"],
+["store12","80th & York - New York NY  (W)","40.772204,-73.949862"],
+["store13","Columbus @ 67th - New York NY  (W)","40.774009,-73.981472"],
+["store14","45th & Broadway - New York NY  (W)","40.75766,-73.985719"],
+["store15","Marriott Marquis - Lobby - New York NY","40.759123,-73.984927"],
+["store16","Second @ 81st - New York NY  (W)","40.77466,-73.954447"],
+["store17","52nd & Seventh - New York NY  (W)","40.761829,-73.981141"],
+["store18","1585 Broadway (47th) - New York NY  (W)","40.759806,-73.985066"],
+["store19","85th & First - New York NY  (W)","40.776101,-73.949971"],
+["store20","92nd & 3rd - New York NY  (W)","40.782606,-73.951235"],
+["store21","165 Broadway - 1 Liberty - New York NY  (W)","40.709727,-74.011395"],
+["store22","1656 Broadway - New York NY  (W)","40.762434,-73.983364"],
+["store23","54th & Broadway - New York NY  (W)","40.764275,-73.982361"],
+["store24","Limited Brands-NYC - New York NY","40.765219,-73.982025"],
+["store25","19th & 8th - New York NY  (W)","40.743218,-74.000605"],
+["store26","60th & Broadway-II - New York NY  (W)","40.769196,-73.982576"],
+["store27","63rd & Broadway - New York NY  (W)","40.771376,-73.982709"],
+["store28","195 Broadway - New York NY  (W)","40.710703,-74.009485"],
+["store29","2 Broadway - New York NY  (W)","40.704538,-74.01324"],
+["store30","2 Columbus Ave. - New York NY  (W)","40.769262,-73.984764"],
+["store31","NY Plaza - New York NY  (W)","40.702802,-74.012784"],
+["store32","36th and Madison - New York NY  (W)","40.748917,-73.982683"],
+["store33","125th St. btwn Adam Clayton & FDB - New York NY","40.808952,-73.948229"],
+["store34","70th & Broadway - New York NY  (W)","40.777463,-73.982237"],
+["store35","2138 Broadway - New York NY  (W)","40.781078,-73.981167"],
+["store36","118th & Frederick Douglas Blvd. - New York NY  (W)","40.806176,-73.954109"],
+["store37","42nd & Second - New York NY  (W)","40.750069,-73.973393"],
+["store38","Broadway @ 81st - New York NY  (W)","40.784972,-73.978987"],
+["store39","Fashion Inst of Technology - New York NY","40.746948,-73.994557"]
+]
+~~~
+
+データが準備できたら、`load` コマンドのPOSTリクエストとして送信します:
+
+~~~
+$ curl --data "@stores.json" "$endpoint/d/load?table=Store" | jq "."
+[
+  [
+    0,
+    1401358564.909,
+    0.158
+  ],
+  [
+    40
+  ]
+]
+~~~
+
+これで、JSONファイル中のすべてのデータが正しく読み込まれます。
+
+### テーブル中のデータを取り出す
+
+以上で、すべてのデータが準備できました。
+
+試しに、`select` コマンドを使って最初の10レコードを取り出してみましょう:
+
+~~~
+$ curl "$endpoint/d/select?table=Store&output_columns=name&limit=10" | jq "."
+[
+  [
+    0,
+    1401362059.7437818,
+    4.935264587402344e-05
+  ],
+  [
+    [
+      [
+        40
+      ],
+      [
+        [
+          "name",
+          "ShortText"
+        ]
+      ],
+      [
+        "1st Avenue & 75th St. - New York NY  (W)"
+      ],
+      [
+        "76th & Second - New York NY  (W)"
+      ],
+      [
+        "Herald Square- Macy's - New York NY"
+      ],
+      [
+        "Macy's 5th Floor - Herald Square - New York NY  (W)"
+      ],
+      [
+        "80th & York - New York NY  (W)"
+      ],
+      [
+        "Columbus @ 67th - New York NY  (W)"
+      ],
+      [
+        "45th & Broadway - New York NY  (W)"
+      ],
+      [
+        "Marriott Marquis - Lobby - New York NY"
+      ],
+      [
+        "Second @ 81st - New York NY  (W)"
+      ],
+      [
+        "52nd & Seventh - New York NY  (W)"
+      ]
+    ]
+  ]
+]
+~~~
+
+もちろん、`query` オプションを使って検索条件を指定する事もできます:
+
+~~~
+$ curl "$endpoint/d/select?table=Store&query=Columbus&match_columns=name&output_columns=name&limit=10" | jq "."
+[
+  [
+    0,
+    1398670157.661574,
+    0.0012705326080322266
+  ],
+  [
+    [
+      [
+        2
+      ],
+      [
+        [
+          "_key",
+          "ShortText"
+        ]
+      ],
+      [
+        "Columbus @ 67th - New York NY  (W)"
+      ],
+      [
+        "2 Columbus Ave. - New York NY  (W)"
+      ]
+    ]
+  ]
+]
+$ curl "$endpoint/d/select?table=Store&filter=name@'Ave'&output_columns=name&limit=10" | jq "."
+[
+  [
+    0,
+    1398670586.193325,
+    0.0003848075866699219
+  ],
+  [
+    [
+      [
+        3
+      ],
+      [
+        [
+          "_key",
+          "ShortText"
+        ]
+      ],
+      [
+        "2nd Ave. & 9th Street - New York NY"
+      ],
+      [
+        "84th & Third Ave - New York NY  (W)"
+      ],
+      [
+        "2 Columbus Ave. - New York NY  (W)"
+      ]
+    ]
+  ]
+]
+~~~
+
+## まとめ
+
+このチュートリアルでは、[Ubuntu Linux][Ubuntu]または[CentOS][]のコンピュータを使って[Droonga][]クラスタを構築しました。
+また、[Groonga][]サーバ互換のシステムとしてデータを読み込ませたり取り出したりすることにも成功しました。
+
+現在の所、DroongaはGroonga互換のコマンドのうちいくつかの限定的な機能にのみ対応しています。
+詳細は[コマンドリファレンス][command reference]を参照して下さい。
+
+続いて、[Droongaクラスタのデータをバックアップしたり復元したりする手順](../dump-restore/)を学びましょう。
+
+  [Ubuntu]: http://www.ubuntu.com/
+  [CentOS]: https://www.centos.org/
+  [Droonga]: https://droonga.org/
+  [Groonga]: http://groonga.org/
+  [command reference]: ../../reference/commands/

  Added: ja/tutorial/1.1.0/index.md (+31 -0) 100644
===================================================================
--- /dev/null
+++ ja/tutorial/1.1.0/index.md    2014-11-30 23:20:40 +0900 (09836b8)
@@ -0,0 +1,31 @@
+---
+title: Droonga チュートリアル
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/tutorial/1.1.0/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+## 初心者とGroonga利用者向け
+
+ * [使ってみる/Groongaからの移行手順](groonga/)
+   * [実験用の仮想マシンを用意する手順](virtual-machines-for-experiments/)
+ * [データベースのバックアップと復元](dump-restore/)
+ * [既存クラスタへのreplicaの追加](add-replica/)
+ * [DroongaとGroongaのベンチマークの取り方](benchmark/)
+
+## 低レイヤのアプリケーション開発者向け
+
+ * [低レイヤのコマンドの基本的な使い方](basic/)
+
+## プラグイン開発者向け
+
+ * [プラグイン開発のチュートリアル](plugin-development/)
+
+

  Added: ja/tutorial/1.1.0/plugin-development/adapter/index.md (+701 -0) 100644
===================================================================
--- /dev/null
+++ ja/tutorial/1.1.0/plugin-development/adapter/index.md    2014-11-30 23:20:40 +0900 (4f79aeb)
@@ -0,0 +1,701 @@
+---
+title: "プラグイン: リクエストとレスポンスを加工し、既存のコマンドに基づいた新しいコマンドを作成する"
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/tutorial/1.1.0/plugin-development/adapter/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## チュートリアルのゴール
+
+Droongaプラグインを自分で開発するための手順を身につけましょう。
+
+このページでは、Droongaプラグインによる「加工」(adaption)に焦点を当てます。
+最後には、練習として、既存の`search`コマンドに基づく新しいコマンド`storeSearch`を提供する小さなプラグインを開発します。
+
+## 前提条件
+
+* [基本的な使い方のチュートリアル][basic tutorial] を完了している必要があります。
+
+
+## 入力メッセージの加工
+
+まず`sample-logger`という簡単なロガープラグインを使って、適合フェーズに作用するプラグインを作りながら、基礎を学びましょう。
+
+外部のシステムからDroonga Engineにやってくるリクエストを加工する必要がある場合があります。このようなときに、プラグインを利用できます。
+
+このセクションでは、どのようにして*前適合フェーズ*のプラグインを作るのかを見ていきます。
+
+### ディレクトリの構造
+
+[基本のチュートリアル][basic tutorial]で作成したシステムに対してプラグインを追加すると仮定します。
+先のチュートリアルでは、Droongaエンジンは `engine` ディレクトリ内に置かれていました。
+
+プラグインは、適切な位置のディレクトリに置かれる必要があります。ディレクトリを作成しましょう:
+
+~~~
+# cd engine
+# mkdir -p lib/droonga/plugins
+~~~
+
+ディレクトリを作成した後は、ディレクトリ構造は以下のようになります:
+
+~~~
+engine
+├── catalog.json
+├── fluentd.conf
+└── lib
+    └── droonga
+        └── plugins
+~~~
+
+
+### プラグインの作成
+
+プラグイン用のコードは、*プラグイン自身の名前と同じ名前*のファイルに書く必要があります。
+これから作るプラグインの名前は`sample-logger`なので、コードは`droonga/plugins`ディレクトリ内の`sample-logger.rb`の中に書いていくことになります。
+
+lib/droonga/plugins/sample-logger.rb:
+
+~~~ruby
+require "droonga/plugin"
+
+module Droonga
+  module Plugins
+    module SampleLoggerPlugin
+      extend Plugin
+      register("sample-logger")
+
+      class Adapter < Droonga::Adapter
+        # メッセージを加工するためのコードをここに書きます。
+      end
+    end
+  end
+end
+~~~
+
+このプラグインは、Droonga Engineに自分自身を登録する以外の事は何もしません。
+
+ * `sample-logger`は、このプラグイン自身の名前です。これは`catalog.json`の中で、プラグインを有効化するために使う事になります。
+ * 上記の例のように、プラグインはモジュールとして定義する必要があります。
+ * 前適合フェーズでの振る舞いは、*アダプター*と呼ばれるクラスとして定義します。
+   アダプタークラスは必ず、プラグインのモジュールの名前空間の配下で、`Droonga::Adapter`のサブクラスとして定義する必要があります。
+
+
+### `catalog.json`でプラグインを有効化する
+
+プラグインを有効化するには、`catalog.json`を更新する必要があります。
+プラグインの名前`"sample-logger"`を、データセットの配下の`"plugins"`のリストに挿入します。例:
+
+catalog.json:
+
+~~~
+(snip)
+      "datasets": {
+        "Starbucks": {
+          (snip)
+          "plugins": ["sample-logger", "groonga", "crud", "search", "dump", "status"],
+(snip)
+~~~
+
+注意:`"sample-logger"`は`"search"`よりも前に置く必要があります。これは、`sample-logger`プラグインが`search`に依存しているからです。Droonga Engineは前適合フェーズにおいて、プラグインを`catalog.json`で定義された順に適用しますので、プラグイン同士の依存関係は(今のところは)自分で解決しなくてはなりません。
+
+### 実行と動作を確認する
+
+Droongaを起動しましょう。
+Rubyがあなたの書いたプラグインのコード群を見つけられるように、`RUBYLIB`環境変数に`./lib`を加えることに注意して下さい。
+
+~~~
+# kill $(cat fluentd.pid)
+# RUBYLIB=./lib fluentd --config fluentd.conf --log fluentd.log --daemon fluentd.pid
+~~~
+
+そうしたら、Engineが正しく動作しているかを確かめます。
+まず、以下のようなJSON形式のリクエストを作成します。
+
+search-columbus.json:
+
+~~~json
+{
+  "dataset" : "Starbucks",
+  "type"    : "search",
+  "body"    : {
+    "queries" : {
+      "stores" : {
+        "source"    : "Store",
+        "condition" : {
+          "query"   : "Columbus",
+          "matchTo" : "_key"
+        },
+        "output" : {
+          "elements"   : [
+            "startTime",
+            "elapsedTime",
+            "count",
+            "attributes",
+            "records"
+          ],
+          "attributes" : ["_key"],
+          "limit"      : -1
+        }
+      }
+    }
+  }
+}
+~~~
+
+これは[基本のチュートリアル][basic tutorial]において"Columbus"を検索する例に対応しています。
+Protocol Adapterへのリクエストは`"body"`要素の中に置かれていることに注意して下さい。
+
+`droonga-request`コマンドを使ってリクエストをDroonga Engineに送信します:
+
+~~~
+# droonga-request --tag starbucks search-columbus.json
+Elapsed time: 0.021544
+[
+  "droonga.message",
+  1392617533,
+  {
+    "inReplyTo": "1392617533.9644868",
+    "statusCode": 200,
+    "type": "search.result",
+    "body": {
+      "stores": {
+        "count": 2,
+        "records": [
+          [
+            "Columbus @ 67th - New York NY  (W)"
+          ],
+          [
+            "2 Columbus Ave. - New York NY  (W)"
+          ]
+        ]
+      }
+    }
+  }
+]
+~~~
+
+これが検索結果です。
+
+
+### プラグインを動作させる: ログをとる
+
+ここまでで作成したプラグインは、何もしない物でした。それでは、このプラグインを何か面白いことをする物にしましょう。
+
+まず最初に、`search`のリクエストを捕まえてログ出力してみます。プラグインを以下のように更新して下さい:
+
+lib/droonga/plugins/sample-logger.rb:
+
+~~~ruby
+(snip)
+    module SampleLoggerPlugin
+      extend Plugin
+      register("sample-logger")
+
+      class Adapter < Droonga::Adapter
+        input_message.pattern = ["type", :equal, "search"]
+
+        def adapt_input(input_message)
+          logger.info("SampleLoggerPlugin::Adapter", :message => input_message)
+        end
+      end
+    end
+(snip)
+~~~
+
+`input_message.pattern`で始まる行は、設定です。
+この例では、プラグインを`"type":"search"`という情報を持つすべての入力メッセージに対して働くように定義しています。
+詳しくは[リファレンスマニュアルの設定のセクション](../../../reference/plugin/adapter/#config)を参照して下さい。
+
+`adapt_input`メソッドは、パターンに当てはまるすべての入力メッセージに対して毎回呼ばれます。
+引数の`input_message`は、入力メッセージをラップした物です。
+
+fluentdを再起動します:
+
+~~~
+# kill $(cat fluentd.pid)
+# RUBYLIB=./lib fluentd --config fluentd.conf --log fluentd.log --daemon fluentd.pid
+~~~
+
+前のセクションと同じリクエストを送信します:
+
+~~~
+# droonga-request --tag starbucks search-columbus.json
+Elapsed time: 0.014714
+[
+  "droonga.message",
+  1392618037,
+  {
+    "inReplyTo": "1392618037.935901",
+    "statusCode": 200,
+    "type": "search.result",
+    "body": {
+      "stores": {
+        "count": 2,
+        "records": [
+          [
+            "Columbus @ 67th - New York NY  (W)"
+          ],
+          [
+            "2 Columbus Ave. - New York NY  (W)"
+          ]
+        ]
+      }
+    }
+  }
+]
+~~~
+
+すると、fluentdのログファイルである`fluentd.log`に以下のようなログが出力される事を確認できるでしょう。
+
+~~~
+2014-02-17 15:20:37 +0900 [info]: SampleLoggerPlugin::Adapter message=#<Droonga::InputMessage:0x007f8ae3e1dd98 @raw_message={"dataset"=>"Starbucks", "type"=>"search", "body"=>{"queries"=>{"stores"=>{"source"=>"Store", "condition"=>{"query"=>"Columbus", "matchTo"=>"_key"}, "output"=>{"elements"=>["startTime", "elapsedTime", "count", "attributes", "records"], "attributes"=>["_key"], "limit"=>-1}}}}, "replyTo"=>{"type"=>"search.result", "to"=>"127.0.0.1:64591/droonga"}, "id"=>"1392618037.935901", "date"=>"2014-02-17 15:20:37 +0900", "appliedAdapters"=>[]}>
+~~~
+
+このログは、メッセージが`SampleLoggerPlugin::Adapter`によって受信されて、Droongaに渡されたことを示しています。実際のデータ処理の前に、この時点でメッセージを加工することができます。
+
+### プラグインでメッセージを加工する
+
+レスポンスで返されるレコードの数を常に1つだけに制限したい場合、すべてのリクエストについて`limit`を`1`に指定する必要があります。プラグインを以下のように変更してみましょう:
+
+lib/droonga/plugins/sample-logger.rb:
+
+~~~ruby
+(snip)
+        def adapt_input(input_message)
+          logger.info("SampleLoggerPlugin::Adapter", :message => input_message)
+          input_message.body["queries"]["stores"]["output"]["limit"] = 1
+        end
+(snip)
+~~~
+
+上の例のように、プラグインは`adapt_input`メソッドの引数として渡される`input_message`を通じて入力メッセージの内容を加工することができます。
+詳細は[当該メッセージの実装であるクラスのリファレンスマニュアル](../../../reference/plugin/adapter/#classes-Droonga-InputMessage)を参照して下さい。
+
+fluentdを再起動します:
+
+~~~
+# kill $(cat fluentd.pid)
+# RUBYLIB=./lib fluentd --config fluentd.conf --log fluentd.log --daemon fluentd.pid
+~~~
+
+再起動後、レスポンスは`records`の値としてレコードを常に(最大で)1つだけ含むようになります。
+
+先の場合と同じリクエストを投げてみましょう:
+
+~~~
+# droonga-request --tag starbucks search-columbus.json
+Elapsed time: 0.017343
+[
+  "droonga.message",
+  1392618279,
+  {
+    "inReplyTo": "1392618279.0578449",
+    "statusCode": 200,
+    "type": "search.result",
+    "body": {
+      "stores": {
+        "count": 2,
+        "records": [
+          [
+            "Columbus @ 67th - New York NY  (W)"
+          ]
+        ]
+      }
+    }
+  }
+]
+~~~
+
+`count`が依然として`2`であることに注意して下さい。これは、`limit`が`count`には影響を与えないという`search`コマンド自体の仕様によるものです。`search`コマンドの詳細については[`search`コマンドのリファレンスマニュアル][search]を参照して下さい。
+
+すると、fluentdのログファイルである`fluentd.log`に以下のようなログが出力される事を確認できるでしょう。
+
+~~~
+2014-02-17 15:24:39 +0900 [info]: SampleLoggerPlugin::Adapter message=#<Droonga::InputMessage:0x007f956685c908 @raw_message={"dataset"=>"Starbucks", "type"=>"search", "body"=>{"queries"=>{"stores"=>{"source"=>"Store", "condition"=>{"query"=>"Columbus", "matchTo"=>"_key"}, "output"=>{"elements"=>["startTime", "elapsedTime", "count", "attributes", "records"], "attributes"=>["_key"], "limit"=>-1}}}}, "replyTo"=>{"type"=>"search.result", "to"=>"127.0.0.1:64616/droonga"}, "id"=>"1392618279.0578449", "date"=>"2014-02-17 15:24:39 +0900", "appliedAdapters"=>[]}>
+~~~
+
+
+## 出力メッセージの加工
+
+Droonga Engineからの出力メッセージ(例えば検索結果など)を加工したい場合は、別のメソッドを定義することでそれを実現できます。
+このセクションでは、出力メッセージを加工するメソッドを定義してみましょう。
+
+
+### 出力のメッセージを加工するメソッドを追加する
+
+`search`コマンドの結果のログを取ってみましょう。
+出力メッセージを処理するために、`adapt_output`メソッドを定義します。
+説明を簡単にするために、ここでは`adapt_input`メソッドの定義を一旦削除します。
+
+lib/droonga/plugins/sample-logger.rb:
+
+~~~ruby
+(snip)
+    module SampleLoggerPlugin
+      extend Plugin
+      register("sample-logger")
+
+      class Adapter < Droonga::Adapter
+        input_message.pattern = ["type", :equal, "search"]
+
+        def adapt_output(output_message)
+          logger.info("SampleLoggerPlugin::Adapter", :message => output_message)
+        end
+      end
+    end
+(snip)
+~~~
+
+`adapt_output`メソッドは、そのプラグイン自身によって捕捉された入力メッセージに起因して送出された出力メッセージに対してのみ呼ばれます(マッチングパターンのみが指定されていて、`adapt_input`メソッドが定義されていない場合であっても)。
+詳細は[プラグイン開発APIのリファレンスマニュアル](../../../reference/plugin/adapter/)を参照して下さい。
+
+### 実行する
+
+fluentdを再起動しましょう:
+
+~~~
+# kill $(cat fluentd.pid)
+# RUBYLIB=./lib fluentd --config fluentd.conf --log fluentd.log --daemon fluentd.pid
+~~~
+
+次に、検索リクエストを送ります(前のセクションと同じJSONをリクエストとして使います):
+
+~~~
+# droonga-request --tag starbucks search-columbus.json
+Elapsed time: 0.015491
+[
+  "droonga.message",
+  1392619269,
+  {
+    "inReplyTo": "1392619269.184789",
+    "statusCode": 200,
+    "type": "search.result",
+    "body": {
+      "stores": {
+        "count": 2,
+        "records": [
+          [
+            "Columbus @ 67th - New York NY  (W)"
+          ],
+          [
+            "2 Columbus Ave. - New York NY  (W)"
+          ]
+        ]
+      }
+    }
+  }
+]
+~~~
+
+fluentdのログは以下のようになっているはずです:
+
+~~~
+2014-02-17 15:41:09 +0900 [info]: SampleLoggerPlugin::Adapter message=#<Droonga::OutputMessage:0x007fddcad4d5a0 @raw_message={"dataset"=>"Starbucks", "type"=>"dispatcher", "body"=>{"stores"=>{"count"=>2, "records"=>[["Columbus @ 67th - New York NY  (W)"], ["2 Columbus Ave. - New York NY  (W)"]]}}, "replyTo"=>{"type"=>"search.result", "to"=>"127.0.0.1:64724/droonga"}, "id"=>"1392619269.184789", "date"=>"2014-02-17 15:41:09 +0900", "appliedAdapters"=>["Droonga::Plugins::SampleLoggerPlugin::Adapter", "Droonga::Plugins::Error::Adapter"]}>
+~~~
+
+ここには、`search`の結果が`adapt_output`メソッドに渡された事(そしてログ出力された事)が示されています。
+
+
+### 結果を適合フェーズで加工する
+
+*後適合フェーズ*において、結果を加工してみましょう。
+例えば、リクエストに対する処理が完了した時刻を示す`completedAt`というアトリビュートを加えるとします。
+プラグインを以下のように更新して下さい:
+
+lib/droonga/plugins/sample-logger.rb:
+
+~~~ruby
+(snip)
+        def adapt_output(output_message)
+          logger.info("SampleLoggerPlugin::Adapter", :message => output_message)
+          output_message.body["stores"]["completedAt"] = Time.now
+        end
+(snip)
+~~~
+
+上の例のように、出力メッセージは`adapt_output`メソッドの引数として渡される`output_message`を通じて加工することができます。
+詳細は[当該メッセージの実装のクラスのリファレンスマニュアル](../../../reference/plugin/adapter/#classes-Droonga-OutputMessage)を参照して下さい。
+
+fluentdを再起動します:
+
+~~~
+# kill $(cat fluentd.pid)
+# RUBYLIB=./lib fluentd --config fluentd.conf --log fluentd.log --daemon fluentd.pid
+~~~
+
+同じ検索リクエストを送ってみましょう:
+
+~~~
+# droonga-request --tag starbucks search-columbus.json
+Elapsed time: 0.013983
+[
+  "droonga.message",
+  1392619528,
+  {
+    "inReplyTo": "1392619528.235121",
+    "statusCode": 200,
+    "type": "search.result",
+    "body": {
+      "stores": {
+        "count": 2,
+        "records": [
+          [
+            "Columbus @ 67th - New York NY  (W)"
+          ],
+          [
+            "2 Columbus Ave. - New York NY  (W)"
+          ]
+        ],
+        "completedAt": "2014-02-17T06:45:28.247669Z"
+      }
+    }
+  }
+]
+~~~
+
+リクエストの処理が完了した時刻を含むアトリビュートである`completedAt`が追加された事を確認できました。
+`fluentd.log`には以下のように出力されているはずです:
+
+~~~
+2014-02-17 15:45:28 +0900 [info]: SampleLoggerPlugin::Adapter message=#<Droonga::OutputMessage:0x007fd384f3ab60 @raw_message={"dataset"=>"Starbucks", "type"=>"dispatcher", "body"=>{"stores"=>{"count"=>2, "records"=>[["Columbus @ 67th - New York NY  (W)"], ["2 Columbus Ave. - New York NY  (W)"]]}}, "replyTo"=>{"type"=>"search.result", "to"=>"127.0.0.1:64849/droonga"}, "id"=>"1392619528.235121", "date"=>"2014-02-17 15:45:28 +0900", "appliedAdapters"=>["Droonga::Plugins::SampleLoggerPlugin::Adapter", "Droonga::Plugins::Error::Adapter"]}>
+~~~
+
+
+## 入出力メッセージの加工
+
+ここまでで、前適合フェーズと後適合フェーズで動作するプラグインの基本を学びました。
+それでは、より実践的なプラグインを開発してみることにしましょう。
+
+Droongaの`search`コマンドを見た時、目的に対していささか柔軟すぎるという印象を持ったことと思います。
+そこで、ここではアプリケーション固有の単純なインターフェースを持つコマンドとして、`search`コマンドをラップする`storeSearch`というコマンドを、`store-search`というプラグインで追加していくことにします。
+
+### シンプルなリクエストを受け取る
+
+まず最初に、`store-search`プラグインを作ります。
+思い出して下さい、プラグインを実装するコードは、これから作ろうとしているプラグインと同じ名前のファイルに書かなくてはなりませんでしたよね。
+ですので、実装を書くファイルは`droonga/plugins`ディレクトリに置かれた`store-search.rb`となります。`StoreSearchPlugin`を以下のように定義しましょう:
+
+lib/droonga/plugins/store-search.rb:
+
+~~~ruby
+require "droonga/plugin"
+
+module Droonga
+  module Plugins
+    module StoreSearchPlugin
+      extend Plugin
+      register("store-search")
+
+      class Adapter < Droonga::Adapter
+        input_message.pattern = ["type", :equal, "storeSearch"]
+
+        def adapt_input(input_message)
+          logger.info("StoreSearchPlugin::Adapter", :message => input_message)
+
+          query = input_message.body["query"]
+          logger.info("storeSearch", :query => query)
+
+          body = {
+            "queries" => {
+              "stores" => {
+                "source"    => "Store",
+                "condition" => {
+                  "query"   => query,
+                  "matchTo" => "_key",
+                },
+                "output"    => {
+                  "elements"   => [
+                    "startTime",
+                    "elapsedTime",
+                    "count",
+                    "attributes",
+                    "records",
+                  ],
+                  "attributes" => [
+                    "_key",
+                  ],
+                  "limit"      => -1,
+                }
+              }
+            }
+          }
+
+          input_message.type = "search"
+          input_message.body = body
+        end
+      end
+    end
+  end
+end
+~~~
+
+次に、プラグインを有効化するために`catalog.json`を更新します。
+先程作成した`sample-logger`は削除しておきます。
+
+catalog.json:
+
+~~~
+(snip)
+      "datasets": {
+        "Starbucks": {
+          (snip)
+          "plugins": ["store-search", "groonga", "crud", "search", "dump", "status"],
+(snip)
+~~~
+
+思い出して下さい、`"store-search"`は`"search"`に依存しているので、`"search"`よりも前に置く必要があります。
+
+fluentdを再起動します:
+
+~~~
+# kill $(cat fluentd.pid)
+# RUBYLIB=./lib fluentd --config fluentd.conf --log fluentd.log --daemon fluentd.pid
+~~~
+
+これで、以下のようなリクエストで新しいコマンドを使えるようになりました:
+
+store-search-columbus.json:
+
+~~~json
+{
+  "dataset" : "Starbucks",
+  "type"    : "storeSearch",
+  "body"    : {
+    "query" : "Columbus"
+  }
+}
+~~~
+
+リクエストを発行するために、以下のようにコマンドを実行しましょう:
+
+~~~
+# droonga-request --tag starbucks store-search-columbus.json
+Elapsed time: 0.01494
+[
+  "droonga.message",
+  1392621168,
+  {
+    "inReplyTo": "1392621168.0119512",
+    "statusCode": 200,
+    "type": "storeSearch.result",
+    "body": {
+      "stores": {
+        "count": 2,
+        "records": [
+          [
+            "Columbus @ 67th - New York NY  (W)"
+          ],
+          [
+            "2 Columbus Ave. - New York NY  (W)"
+          ]
+        ]
+      }
+    }
+  }
+]
+~~~
+
+この時、`fluentd.log`には以下のようなログが出力されているはずです:
+
+~~~
+2014-02-17 16:12:48 +0900 [info]: StoreSearchPlugin::Adapter message=#<Droonga::InputMessage:0x007fe4791d3958 @raw_message={"dataset"=>"Starbucks", "type"=>"storeSearch", "body"=>{"query"=>"Columbus"}, "replyTo"=>{"type"=>"storeSearch.result", "to"=>"127.0.0.1:49934/droonga"}, "id"=>"1392621168.0119512", "date"=>"2014-02-17 16:12:48 +0900", "appliedAdapters"=>[]}>
+2014-02-17 16:12:48 +0900 [info]: storeSearch query="Columbus"
+~~~
+
+以上の手順で、単純なリクエストによって店舗の検索を行えるようになりました。
+
+注意:レスポンスのメッセージの`"type"`の値が`"search.result"`から`"storeSearch.result"`に変わっていることに注目して下さい。これは、このレスポンスが、`type`が`"storeSearch"`であるリクエストに起因して発生した物であるために、Droonga Engineによって自動的に`"(入力メッセージのtype).result"`という`type`が設定されたからです。言い換えると、出力メッセージの`type`は、`adapt_input`での`input_message.type = "search"`のような方法でわざわざ自分で設定し直す必要はありません。
+
+### シンプルなレスポンスを返す
+
+次に、結果をより単純な形で、単に店舗の名前の配列だけを返すだけという物にしてみましょう。
+
+`adapt_output`を以下のように定義して下さい。
+
+lib/droonga/plugins/store-search.rb:
+
+~~~ruby
+(snip)
+    module StoreSearchPlugin
+      extend Plugin
+      register("store-search")
+
+      class Adapter < Droonga::Adapter
+        (snip)
+
+        def adapt_output(output_message)
+          logger.info("StoreSearchPlugin::Adapter", :message => output_message)
+
+          records = output_message.body["stores"]["records"]
+          simplified_results = records.flatten
+
+          output_message.body = simplified_results
+        end
+      end
+    end
+(snip)
+~~~
+
+`adapt_output`メソッドは、そのプラグインによって捕捉された入力メッセージに対応する出力メッセージのみを受け取ります。
+
+fluentdを再起動します:
+
+~~~
+# kill $(cat fluentd.pid)
+# RUBYLIB=./lib fluentd --config fluentd.conf --log fluentd.log --daemon fluentd.pid
+~~~
+
+リクエストを送ってみましょう:
+
+~~~
+# droonga-request --tag starbucks store-search-columbus.json
+Elapsed time: 0.014859
+[
+  "droonga.message",
+  1392621288,
+  {
+    "inReplyTo": "1392621288.158763",
+    "statusCode": 200,
+    "type": "storeSearch.result",
+    "body": [
+      "Columbus @ 67th - New York NY  (W)",
+      "2 Columbus Ave. - New York NY  (W)"
+    ]
+  }
+]
+~~~
+
+`fluentd.log`には以下のようなログが出力されているはずです:
+
+~~~
+2014-02-17 16:14:48 +0900 [info]: StoreSearchPlugin::Adapter message=#<Droonga::InputMessage:0x007ffb8ada9d68 @raw_message={"dataset"=>"Starbucks", "type"=>"storeSearch", "body"=>{"query"=>"Columbus"}, "replyTo"=>{"type"=>"storeSearch.result", "to"=>"127.0.0.1:49960/droonga"}, "id"=>"1392621288.158763", "date"=>"2014-02-17 16:14:48 +0900", "appliedAdapters"=>[]}>
+2014-02-17 16:14:48 +0900 [info]: storeSearch query="Columbus"
+2014-02-17 16:14:48 +0900 [info]: StoreSearchPlugin::Adapter message=#<Droonga::OutputMessage:0x007ffb8ad78e48 @raw_message={"dataset"=>"Starbucks", "type"=>"dispatcher", "body"=>{"stores"=>{"count"=>2, "records"=>[["Columbus @ 67th - New York NY  (W)"], ["2 Columbus Ave. - New York NY  (W)"]]}}, "replyTo"=>{"type"=>"storeSearch.result", "to"=>"127.0.0.1:49960/droonga"}, "id"=>"1392621288.158763", "date"=>"2014-02-17 16:14:48 +0900", "appliedAdapters"=>["Droonga::Plugins::StoreSearchPlugin::Adapter", "Droonga::Plugins::Error::Adapter"], "originalTypes"=>["storeSearch"]}>
+~~~
+
+このように、単純化されたレスポンスを受け取ることができました。
+
+ここで解説したように、アダプターはアプリケーション固有の検索機能を実装するために利用できます。
+
+## まとめ
+
+既存のコマンドと独自のアダプターのみを使って新しいコマンドを追加する方法について学びました。
+その過程で、入力メッセージと出力メッセージの両方について、どのように受け取り加工するのかについても学びました。
+
+詳細は[リファレンスマニュアル](../../../reference/plugin/adapter/)を参照して下さい。
+
+
+  [basic tutorial]: ../../basic/
+  [overview]: ../../../overview/
+  [search]: ../../../reference/commands/select/

  Added: ja/tutorial/1.1.0/plugin-development/handler/index.md (+542 -0) 100644
===================================================================
--- /dev/null
+++ ja/tutorial/1.1.0/plugin-development/handler/index.md    2014-11-30 23:20:40 +0900 (6ca682a)
@@ -0,0 +1,542 @@
+---
+title: "プラグイン: 全てのパーティション上でリクエストを処理し、ストレージを操作する新たなコマンドを追加する"
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/tutorial/1.1.0/plugin-development/handler/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## チュートリアルのゴール
+
+このチュートリアルでは、各ボリュームでのハンドリング・フェーズ(handling phase)において分散された処理を実行するプラグインを開発するための方法を学びます。
+言い換えると、このチュートリアルでは *新しいコマンドをDroonga Engineに加える方法* を説明します。
+
+## 前提条件
+
+* [adaption phaseのチュートリアル][adapter]を完了していること。
+
+## リクエストのハンドリング
+
+適合フェーズからリクエストが転送されてくると、Droonga Engineは*処理フェーズ(processing phase)*に入ります。
+
+処理フェーズでは、Droonga Engineはリクエストを「ステップ」ごとに段階的に処理します。
+1つの *ステップ* は、*立案フェーズ*、*配布フェーズ*、*ハンドリング・フェーズ*、そして *集約フェーズ* という4つのフェーズから成り立っています。
+
+ * *立案フェーズ* では、Droonga Engineはリクエストを処理するための複数のより小さなステップを生成します。
+   単純なコマンドでは、このフェーズのためのコードを書く必要はありません。その場合には、リクエストを処理するためのステップが1つだけ存在するということになります。
+ * *配布フェーズ* では、Droonga Engineは、リクエストを処理するためのタスクを表すメッセージを複数のボリュームに配布します。
+   (この処理は完全にDroonga Engine自身によって行われるため、このフェーズはプラグインでの拡張はできません。)
+ * *ハンドリング・フェーズ*では、*各single volumeが、配布された単一のタスクメッセージを入力として処理して、その結果を返します*。
+   ストレージへの読み書きが実際に発生するのは、この時になります。
+   実際に、いくつかのコマンド(例えば `search`、`add`、`create_table` など)はこのタイミングでストレージの読み書きを行っています。
+ * *集約フェーズ* では、Droonga Engineが各ボリュームから返された結果を集約して、単一の結果に統合します。
+   Droonga Engineは汎用の便利なcollectorをいくつか含んでいるため、多くの場合において、あなたはこのフェーズのためのコードを書く必要はありません。
+
+すべてのステップの処理が終了すると、Droonga Engineは結果を後適合フェーズへと転送します。
+
+ハンドリング・フェーズでの操作を定義するクラスは、*ハンドラー*と呼ばれます。
+簡単に言うと、新しいハンドラーを追加するということは、新しいコマンドを追加するということを意味します。
+
+
+
+
+
+
+## 読み取り専用のコマンド `countRecords` を設計する
+
+このチュートリアルでは、新しい独自のコマンド `countRecords` を実装することにします。
+まず、コマンドの仕様を設計しましょう。
+
+このコマンドは、個々のsingle volumeにおける指定テーブルの全レコードの数を報告します。
+これは、クラスタ内でどのようにレコードが分散されているかを調べる助けになるでしょう。
+このコマンドはデータベースの内容を何も変更しないので、これは*読み取り専用のコマンド*と言うことができます。
+
+リクエストは、以下のようにテーブル名を必ず1つ含まなくてはなりません:
+
+~~~json
+{
+  "dataset" : "Starbucks",
+  "type"    : "countRecords",
+  "body"    : {
+    "table": "Store"
+  }
+}
+~~~
+
+上記のような内容のJSON形式のファイル `count-records.json` を作成します。
+以降の検証では、このファイルを使い続けていきましょう。
+
+レスポンスは、各single volumeごとのそのテーブルにあるレコードの数を含んでいなくてはなりません。
+これは以下のように、配列として表現できます:
+
+~~~json
+{
+  "inReplyTo": "(message id)",
+  "statusCode": 200,
+  "type": "countRecords.result",
+  "body": [10, 10]
+}
+~~~
+
+ボリュームが2つある場合、20個のレコードが均等に保持されているはずなので、配列は上記のように2つの要素を持つことになるでしょう。
+この例は、各ボリュームがレコードを10個ずつ保持している事を示しています。
+
+それでは、ここまでで述べたような形式のリクエストを受け付けて上記のようなレスポンスを返す、というプラグインを作っていきましょう。
+
+
+### ディレクトリ構成
+
+プラグインのディレクトリ構成は、[適合フェーズ用のプラグインのチュートリアル][adapter]での説明と同じ様式に則ります。
+`count-records.rb` というファイルとして、`count-records` プラグインを作りましょう。ディレクトリツリーは以下のようになります:
+
+~~~
+lib
+└── droonga
+    └── plugins
+            └── count-records.rb
+~~~
+
+次に、以下のようにしてプラグインの骨組みを作ります:
+
+lib/droonga/plugins/count-records.rb:
+
+~~~ruby
+require "droonga/plugin"
+
+module Droonga
+  module Plugins
+    module CountRecordsPlugin
+      extend Plugin
+      register("count-records")
+    end
+  end
+end
+~~~
+
+### コマンドのための「ステップ」を定義する
+
+以下のようにして、プラグインの中で新しいコマンド `countRecords` のための「ステップ」を定義します:
+
+lib/droonga/plugins/count-records.rb:
+
+~~~ruby
+require "droonga/plugin"
+
+module Droonga
+  module Plugins
+    module CountRecordsPlugin
+      extend Plugin
+      register("count-records")
+
+      define_single_step do |step|
+        step.name = "countRecords"
+      end
+    end
+  end
+end
+~~~
+
+`step.name` の値は、コマンド自身の名前と同じです。
+今のところは、コマンドの名前を定義しただけです。
+それ以上のことはしていません。
+
+### ハンドリングの仕方を定義する
+
+このコマンドはハンドラーを持っていないため、まだ何も処理が行われません。
+それではコマンドの挙動を定義しましょう。
+
+lib/droonga/plugins/count-records.rb:
+
+~~~ruby
+require "droonga/plugin"
+
+module Droonga
+  module Plugins
+    module CountRecordsPlugin
+      extend Plugin
+      register("count-records")
+
+      define_single_step do |step|
+        step.name = "countRecords"
+        step.handler = :Handler
+      end
+
+      class Handler < Droonga::Handler
+        def handle(message)
+          [0]
+        end
+      end
+    end
+  end
+end
+~~~
+
+`Handler` というクラスは、新しいコマンドのためのハンドラークラスです。
+
+ * ハンドラークラスは、組み込みのクラス `Droonga::Handler` を継承してなければなりません。
+ * ハンドラークラスは、リクエストをどのように扱うかの処理を実装します。
+   インスタンスメソッド `#handle` が実際にリクエストを処理します。
+
+現時点で、このハンドラーは何も処理を行わず、単に数値1つからなる配列を含む処理結果を返すだけです。
+戻り値はレスポンスのbodyを組み立てるのに使われます。
+
+The handler is bound to the step with the configuration `step.handler`.
+Because we define the class `Handler` after `define_single_step`, we specify the handler class with the symbol `:Handler`.
+If you define the handler class before `define_single_step`, you can simply write `step.handler = Handler`.
+Moreover, a class path string like `"OtherPlugin::Handler"` is also available.
+
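+For example, here is a minimal sketch (not part of the tutorial's actual plugin code) of that alternative ordering, where the handler class is defined first and referenced directly:
+
+~~~ruby
+require "droonga/plugin"
+
+# Sketch only: the same countRecords command, but the Handler class is
+# defined before define_single_step, so it can be referenced as a constant.
+module Droonga
+  module Plugins
+    module CountRecordsPlugin
+      extend Plugin
+      register("count-records")
+
+      class Handler < Droonga::Handler
+        def handle(message)
+          [0]
+        end
+      end
+
+      define_single_step do |step|
+        step.name    = "countRecords"
+        step.handler = Handler  # the class itself, no symbol needed
+      end
+    end
+  end
+end
+~~~
+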
+Then, we also have to bind a collector to the step, with the configuration `step.collector`.
+
+lib/droonga/plugins/count-records.rb:
+
+~~~ruby
+# (snip)
+      define_single_step do |step|
+        step.name = "countRecords"
+        step.handler = :Handler
+        step.collector = Collectors::Sum
+      end
+# (snip)
+~~~
+
+The `Collectors::Sum` is one of the built-in collectors.
+It merges the results returned from the handler instances of each volume into one result.
+
+
+### `catalog.json`でプラグインを有効化する
+
+Update catalog.json to activate this plugin.
+Add `"count-records"` to `"plugins"`.
+
+~~~
+(snip)
+      "datasets": {
+        "Starbucks": {
+          (snip)
+          "plugins": ["count-records", "groonga", "crud", "search", "dump", "status"],
+(snip)
+~~~
+
+### 実行と動作を確認する
+
+Let's get Droonga started.
+Note that you need to add the ./lib directory to the RUBYLIB environment variable so that Ruby can find your plugin.
+
+    # kill $(cat fluentd.pid)
+    # RUBYLIB=./lib fluentd --config fluentd.conf --log fluentd.log --daemon fluentd.pid
+
+Then, send a request message for the `countRecords` command to the Droonga Engine.
+
+~~~
+# droonga-request --tag starbucks count-records.json
+Elapsed time: 0.01494
+[
+  "droonga.message",
+  1392621168,
+  {
+    "inReplyTo": "1392621168.0119512",
+    "statusCode": 200,
+    "type": "countRecords.result",
+    "body": [
+      0,
+      0,
+      0
+    ]
+  }
+]
+~~~
+
+You'll get a response message like above.
+Look at these points:
+
+ * The `type` of the response becomes `countRecords.result`.
+   It is automatically named by the Droonga Engine.
+ * The format of the `body` is the same as the value returned by the handler's `handle` method.
+
+There are three elements in the array. Why?
+
+ * Remember that the `Starbucks` dataset was configured with two replicas and three sub volumes for each replica, in the `catalog.json` of [the basic tutorial][basic].
+ * Because it is a read-only command, a request is delivered to only one replica (and it is chosen at random).
+   Then only three single volumes receive the command, so only three results appear, not six.
+   (TODO: I have to add a figure to indicate active nodes: [000, 001, 002, 010, 011, 012] => [000, 001, 002])
+ * The `Collectors::Sum` collects them.
+   Those three results are joined to just one array by the collector.
+
+As the result, just one array with three elements appears in the final response.
+
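+The following is only a conceptual sketch (not actual Droonga internals) of this collection: merging the per-volume results with `+` concatenates the arrays rather than adding numbers.
+
+~~~ruby
+# Conceptual sketch: each single volume returns an array like [0].
+per_volume_results = [[0], [0], [0]]
+merged = per_volume_results.inject(:+)  # => [0, 0, 0]
+~~~
+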
+### Read-only access to the storage
+
+Now, each instance of the handler class always returns `0` as its result.
+Let's implement code to count the number of records in the actual storage.
+
+lib/droonga/plugins/count-records.rb:
+
+~~~ruby
+# (snip)
+      class Handler < Droonga::Handler
+        def handle(message)
+          request = message.request
+          table_name = request["table"]
+          table = @context[table_name]
+          count = table.size
+          [count]
+        end
+      end
+# (snip)
+~~~
+
+Look at the argument of the `handle` method.
+It is different from the one an adapter receives.
+A handler receives a message representing a distributed task.
+So you have to extract the request message from the distributed task with `request = message.request`.
+
+The instance variable `@context` is an instance of `Groonga::Context` for the storage of the corresponding single volume.
+See the [class reference of Rroonga][Groonga::Context].
+You can use any feature of Rroonga via `@context`.
+For now, we simply access the table by its name and read the value of its `size` method, which returns the number of records.
+
+Then, test it.
+Restart the Droonga Engine and send the request again.
+
+~~~
+# kill $(cat fluentd.pid)
+# RUBYLIB=./lib fluentd --config fluentd.conf --log fluentd.log --daemon fluentd.pid
+# droonga-request --tag starbucks count-records.json
+Elapsed time: 0.01494
+[
+  "droonga.message",
+  1392621168,
+  {
+    "inReplyTo": "1392621168.0119512",
+    "statusCode": 200,
+    "type": "countRecords.result",
+    "body": [
+      14,
+      15,
+      11
+    ]
+  }
+]
+~~~
+
+Because there are 40 records in total, they are distributed roughly evenly across the volumes, as shown above.
+
+## Design a read-write command `deleteStores`
+
+Next, let's add another new custom command `deleteStores`.
+
+The command deletes records of the `Store` table, from the storage.
+Because it modifies something in existing storage, it is a *read-write command*.
+
+The request must have the condition to select records to be deleted, like:
+
+~~~json
+{
+  "dataset" : "Starbucks",
+  "type"    : "deleteStores",
+  "body"    : {
+    "keyword": "Broadway"
+  }
+}
+~~~
+
+Any record including the given keyword `"Broadway"` in its `"key"` is deleted from the storage of all volumes.
+
+Create a JSON file `delete-stores-broadway.json` with the content above.
+We'll use it for testing.
+
+The response must have a boolean value to indicate "success" or "failure", like:
+
+~~~json
+{
+  "inReplyTo": "(message id)",
+  "statusCode": 200,
+  "type": "deleteStores.result",
+  "body": true
+}
+~~~
+
+If the request is successfully processed, the `body` becomes `true`. Otherwise `false`.
+The `body` is just one boolean value, because we don't have to receive multiple results from volumes.
+
+
+### ディレクトリの構造
+
+Now let's create the `delete-stores` plugin, as the file `delete-stores.rb`. The directory tree will be:
+
+~~~
+lib
+└── droonga
+    └── plugins
+            └── delete-stores.rb
+~~~
+
+次に、以下のようにしてプラグインの骨組みを作ります:
+
+lib/droonga/plugins/delete-stores.rb:
+
+~~~ruby
+require "droonga/plugin"
+
+module Droonga
+  module Plugins
+    module DeleteStoresPlugin
+      extend Plugin
+      register("delete-stores")
+    end
+  end
+end
+~~~
+
+
+### コマンドのための「ステップ」を定義する
+
+Define a "step" for the new `deleteStores` command, in your plugin. Like:
+
+lib/droonga/plugins/delete-stores.rb:
+
+~~~ruby
+require "droonga/plugin"
+
+module Droonga
+  module Plugins
+    module DeleteStoresPlugin
+      extend Plugin
+      register("delete-stores")
+
+      define_single_step do |step|
+        step.name = "deleteStores"
+        step.write = true
+      end
+    end
+  end
+end
+~~~
+
+Look at a new configuration `step.write`.
+Because this command modifies the storage, we must indicate it clearly.
+
+### ハンドリングの仕方を定義する
+
+Let's define the handler.
+
+lib/droonga/plugins/delete-stores.rb:
+
+~~~ruby
+require "droonga/plugin"
+
+module Droonga
+  module Plugins
+    module DeleteStoresPlugin
+      extend Plugin
+      register("delete-stores")
+
+      define_single_step do |step|
+        step.name = "deleteStores"
+        step.write = true
+        step.handler = :Handler
+        step.collector = Collectors::And
+      end
+
+      class Handler < Droonga::Handler
+        def handle(message)
+          request = message.request
+          keyword = request["keyword"]
+          table = @context["Store"]
+          table.delete do |record|
+            record.key =~ keyword
+          end
+          true
+        end
+      end
+    end
+  end
+end
+~~~
+
+Remember, you have to extract the request message from the received task message.
+
+The handler finds and deletes existing records which have the given keyword in their "key", using the [API of Rroonga][Groonga::Table_delete].
+
+And, the `Collectors::And` is bound to the step by the configuration `step.collector`.
+It is also one of the built-in collectors, and merges the boolean values returned from the handler instances of each volume into one boolean value.
+
+### `catalog.json`でプラグインを有効化する
+
+Update catalog.json to activate this plugin.
+Add `"delete-stores"` to `"plugins"`.
+
+~~~
+(snip)
+      "datasets": {
+        "Starbucks": {
+          (snip)
+          "plugins": ["delete-stores", "count-records", "groonga", "crud", "search", "dump", "status"],
+(snip)
+~~~
+
+### Run and test
+
+Restart the Droonga Engine and send the request we prepared (`delete-stores-broadway.json`).
+
+~~~
+# kill $(cat fluentd.pid)
+# RUBYLIB=./lib fluentd --config fluentd.conf --log fluentd.log --daemon fluentd.pid
+# droonga-request --tag starbucks delete-stores-broadway.json
+Elapsed time: 0.01494
+[
+  "droonga.message",
+  1392621168,
+  {
+    "inReplyTo": "1392621168.0119512",
+    "statusCode": 200,
+    "type": "deleteStores.result",
+    "body": true
+  }
+]
+~~~
+
+Because the results from all volumes are merged into a single boolean value, the response's `body` is simply `true`.
+To verify the deletion, send a request of the `countRecords` command.
+
+~~~
+# droonga-request --tag starbucks count-records.json
+Elapsed time: 0.01494
+[
+  "droonga.message",
+  1392621168,
+  {
+    "inReplyTo": "1392621168.0119512",
+    "statusCode": 200,
+    "type": "countRecords.result",
+    "body": [
+      7,
+      13,
+      6
+    ]
+  }
+]
+~~~
+
+Note that the numbers of records are smaller than in the previous result.
+The volumes now hold 7 + 13 + 6 = 26 records, so 40 - 26 = 14 records have been deleted in total.
+
+## Conclusion
+
+We have learned how to add a simple new command that works on stored data.
+In the process, we have also learned how to create plugins that work in the handling phase.
+
+
+  [adapter]: ../adapter
+  [basic]: ../basic
+  [Groonga::Context]: http://ranguba.org/rroonga/en/Groonga/Context.html
+  [Groonga::Table_delete]: http://ranguba.org/rroonga/en/Groonga/Table.html#delete-instance_method

  Added: ja/tutorial/1.1.0/plugin-development/index.md (+92 -0) 100644
===================================================================
--- /dev/null
+++ ja/tutorial/1.1.0/plugin-development/index.md    2014-11-30 23:20:40 +0900 (05f2d09)
@@ -0,0 +1,92 @@
+---
+title: Droongaプラグイン開発チュートリアル
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/tutorial/1.1.0/plugin-development/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## チュートリアルのゴール
+
+Droongaプラグインの作り方を理解します。
+[基本的な使い方のチュートリアル][basic tutorial]を完了している必要があります。
+
+
+## プラグインとは
+
+プラグインはDroongaの中でもっとも重要なコンセプトの一つです。
+プラグインがDroongaを柔軟なものにしています。
+
+多くの現実的なデータ処理タスクでは、問題に応じたデータの取り扱いが必要です。これを汎用の方法で解決するのは簡単ではありせん。
+
+ * 外部のシステムと連携するために、入力のリクエスト形式を変更する必要があるかもしれません。外部のシステムが理解できるような形式で出力するために、出力を加工する必要があるかもしれません。
+ * Droongaが標準で提供する機能よりもさらに複雑なデータ操作を、ストレージに直接アクセスしながら効率よく行う必要があるかもしれません。
+ * Droongaのデータ分散と回収ロジックをコントロールすることで、Droongaの分散性能を活かした高度なアプリケーションを構築する必要があるかもしれません。
+
+このような場合にプラグインを利用することができます。
+
+## Droongaエンジンにおけるプラガブルな操作
+
+Droonga Engineにはプラガブルなフェーズが大きく分けて2つあり、そのうちの1つは3つのサブフェーズから構成されます。プラグインの観点からすると、都合1つから4つの操作に処理を追加することができます。
+
+適合フェーズ(adaption phase)
+: このフェーズでは、プラグインは入力のリクエストと出力のレスポンスを加工できます。
+
+処理フェーズ(processing phase)
+: このフェーズでは、プラグインはそれぞれのボリューム上で入力のリクエストを逐次処理します。
+
+処理フェーズにはプラグインでの拡張が可能な3つのサブフェーズがあります:
+
+ハンドリング・フェーズ(handling phase)
+: このフェーズでは、プラグインは低レベルのデータ処理、たとえばデータベース操作などを行うことができます。
+
+立案フェーズ(planning phase)
+: このフェーズでは、プラグインは入力のリクエストを複数のステップに分割できます。
+
+集約フェーズ(collection phase)
+: このフェーズでは、プラグインはステップから得られた結果をマージして一つの結果を生成できます。
+
+上記の説明は、Droongaシステムの設計に基いているので、少々わかりづらいかもしれません。
+ここでは、プラグインでの拡張が可能な操作という観点から離れて、プラグインで何をしたいのかという観点から見てみましょう。
+
+既に存在するコマンドを元にして新たなコマンドを作成する
+: たとえば、複雑な`search`コマンドをラップして、手軽に使えるコマンドを作りたいかもしれません。
+  リクエストとレスポンスに対する*適合(adaption)*がそれを可能にします。
+
+新しいコマンドを追加してストレージを操作する
+: たとえば、ストレージに保存されているデータを自在に操作したいかもしれません。
+  リクエストに対する*ハンドリング(handling)*がそれを可能にします。
+
+複雑なタスクを実現するコマンドを追加する
+: たとえば、標準の`search`コマンドのような強力なコマンドを実装したいかもしれません。リクエストに対する*立案(planning)*と*収集(collection)*がそれを可能にします。
+
+このチュートリアルでは、最初に*適合(adaption)*を扱います。
+これはもっともプラグインの基本的なユースケースなので、Droongaにおけるプラグイン開発の基礎を理解する助けになるはずです。
+その後、他のケースも上述の順で説明します。
+このチュートリアルに従うと、プラグインの書き方を理解できるようになります。
+自分独自の要求を満たすプラグインを作成するための第一歩となることでしょう。
+
+## プラグインを開発するには
+
+詳細は以下のサブチュートリアルを参照してください:
+
+ 1. [リクエストとレスポンスを加工し、既存のコマンドに基づいた新たなコマンドを作成する][adapter]。
+ 2. [全てのボリューム上でリクエストを処理し、ストレージを操作する新たなコマンドを追加する][handler]。
+ 3. 特定のボリューム上だけでリクエストを処理し、より効率的にストレージを操作するコマンドを追加する (準備中)
+ 4. リクエストの分散とレスポンスの回収を行い、新たな複雑なコマンドを追加する (準備中)
+
+
+  [basic tutorial]: ../basic/
+  [overview]: ../../overview/
+  [adapter]: ./adapter/
+  [handler]: ./handler/
+  [distribute-collect]: ./distribute-collect/

  Added: ja/tutorial/1.1.0/virtual-machines-for-experiments/index.md (+250 -0) 100644
===================================================================
--- /dev/null
+++ ja/tutorial/1.1.0/virtual-machines-for-experiments/index.md    2014-11-30 23:20:40 +0900 (f772a2b)
@@ -0,0 +1,250 @@
+---
+title: "Droongaチュートリアル: 実験用の仮想マシンを用意する手順"
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/tutorial/1.1.0/virtual-machines-for-experiments/index.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## チュートリアルのゴール
+
+実験用に複数(3台)の仮想マシンを用意する手順を学ぶこと。
+
+## なぜ仮想マシンが必要なのか?
+
+Droongaは分散型のデータ処理システムなので、クラスタを構成するには複数の台のコンピュータを用意する必要があります。
+安全のためにも(そしてより良い性能を得るためにも)、Droongaノードにはそれ用のコンピュータを用意することが望ましいです。
+
+有効なレプリケーションのためには、2台以上のコンピュータが必要です。
+また、クラスタ構成の管理を試してみるには、3台以上のコンピュータが必要となります。
+
+しかしながら、仮に、単に検証や開発をしたい場合でも、複数の仮想マシンのインスタンスをVPSサービスで利用するにはお金がかかります。
+そのような用途では、あなたの手持ちのコンピュータ上でプライベートな仮想マシンを使うのがおすすめです。
+
+幸いなことに、[Vagrant][]を使うと仮想マシンを簡単に管理することができます。
+このチュートリアルでは、Vagrantを使って*3台の仮想マシンを用意する手順*を解説します。
+
+## ホストマシンを用意する
+
+まず最初に、仮想マシンのホストとなるPCを用意する必要があります。
+仮想マシンは多くのRAMを要求する場合があるため、ホストマシンにはできれば8GB以上のメモリがあることが望ましいです。
+
+多くの場合、メジャーなプラットフォーム向けにはビルド済みのバイナリが使われるため、それほど多くのRAMは必要ではありません。
+しかしながら、仮想マシン上で動作する環境がマイナーなディストリビューションであったり、そのディストリビューションの最新のバージョンであった場合には、その環境向けのビルド済みバイナリが用意されていないことがあり得ます。そのような場合、バイナリは自動的にコンパイルされますが、その際に2GB程度のメモリが要求されます。
+ネイティブ拡張のビルド時に奇妙なエラーに遭遇した場合は、仮想マシンのメモリの割り当て量を増やして再度インストールを行って下さい。
+([このチュートリアルの付録も参照して下さい](#less-size-memory)。)
+
+## 仮想マシンを用意する手順
+
+### VirtualBoxをインストールする
+
+Vagrantには、仮想マシンを実行するためのバックエンドが必要です。ここでは推奨環境の[VirtualBox][]をインストールすることにします。
+例えば、ホストマシンが[Ubuntu][]で動作するPCなのであれば、VirtualBoxは以下のように`apt`コマンドでインストールできます:
+
+~~~
+$ sudo apt-get install virtualbox
+~~~
+
+その他の環境では、[VirtualBoxのWebサイト][VirtualBox]にある手順に従ってVirtualBoxをインストールして下さい。
+
+### Vagrantをインストールする
+
+次に、[Vagrant][]をインストールします。[VagrantのWebサイト][Vagrant]にある手順に従って、Vagrantをインストールして下さい。
+例えば、ホストマシンがx64のUbuntu PCなのであれば、以下の要領です:
+
+~~~
+$ wget https://dl.bintray.com/mitchellh/vagrant/vagrant_1.6.5_x86_64.deb
+$ sudo dpkg -i vagrant_1.6.5_x86_64.deb
+~~~
+
+注意: Ubuntu 14.04では`apt-get install vagrant`でもVagrantをインストールできますが、これは使わないで下さい。この方法でインストールできるVagrantはバージョンが古いため、[Vagrant Cloud][]からboxをインポートできません。
+
+### boxの種類を決めて、Vagrantfileを用意する
+
+[Vagrant Cloud][]のサイトから、実験に使うためのboxを選びます。
+例えば[Ubuntu Trusty (x64)のbox](https://vagrantcloud.com/ubuntu/boxes/trusty64)を使うのであれば、以下のようにします:
+
+~~~
+$ mkdir droonga-ubuntu-trusty
+$ cd droonga-ubuntu-trusty
+$ vagrant init ubuntu/trusty64
+~~~
+
+この操作で、設定ファイルの`Vagrantfile`が自動生成されます。
+しかし、このファイルはDroongaクラスタの実験のために、以下のように完全に書き換えてしまいます:
+
+`Vagrantfile`:
+
+~~~
+n_machines = 3
+box        = "ubuntu/trusty64"
+
+VAGRANTFILE_API_VERSION = "2"
+Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
+  n_machines.times do |index|
+    config.vm.define :"node#{index}" do |node_config|
+      node_config.vm.box = box
+      node_config.vm.network(:private_network,
+                             :ip => "192.168.100.#{50 + index}")
+      node_config.vm.host_name = "node#{index}"
+      node_config.vm.provider("virtualbox") do |virtual_box|
+        virtual_box.memory = 2048
+      end
+    end
+  end
+end
+~~~
+
+注:この`Vagrantfile`では、3つの仮想マシンを2GB(2048MB)のメモリを伴って定義しています。
+ですので、ホストマシンは6GB以上のメモリを搭載している必要があります。
+もしホストマシンのメモリがそこまで多くないのであれば、この時点では`512`(512MB)などの適当な値を設定しておいて下さい。
+
+### 仮想マシンを起動する
+
+仮想マシンは、`vagrant up`というコマンドで起動できます:
+
+~~~
+$ vagrant up
+Bringing machine 'node0' up with 'virtualbox' provider...
+Bringing machine 'node1' up with 'virtualbox' provider...
+Bringing machine 'node2' up with 'virtualbox' provider...
+...
+~~~
+
+これにより、Vagrantは自動的に仮想マシンのイメージを[Vagrant Cloud][]のWebサイトからダウンロードし、それが終わり次第仮想マシンを起動します。
+用意が完了すると、仮想ネットワーク上のIPアドレスとして`192.168.100.50`、`192.168.100.51`、`192.168.100.52`をそれぞれ持つ3台の仮想マシンが動作している状態になります。
+
+仮想マシンが正しく動いていることを確認しましょう。
+仮想マシンには`vagrant ssh`コマンドを使って以下のようにログインできます:
+
+~~~
+$ vagrant ssh node0
+Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-36-generic x86_64)
+...
+vagrant@node0:~$ exit
+~~~
+
+
+### 仮想マシンをSSHクライアントに登録する
+
+仮想マシンにログインするためには、通常の`ssh`コマンドではなく、`vagrant ssh`コマンドを使わなくてはなりません。
+また、その前には`Vagrantfile`があるディレクトリに`cd`する必要もあります。
+これは少々面倒です。
+
+ですので、SSHクライアントのローカル設定ファイルに、以下のようにして仮想マシンのための設定を追加しておきましょう:
+
+~~~
+$ vagrant ssh-config node0 >> ~/.ssh/config
+$ vagrant ssh-config node1 >> ~/.ssh/config
+$ vagrant ssh-config node2 >> ~/.ssh/config
+~~~
+
+これで、`vagrant ssh`コマンドを使わずとも、仮想マシンの名前を指定してログインできるようになります:
+
+~~~
+$ ssh node0
+~~~
+
+### 仮想マシン同士で互いのホスト名を名前解決できるように設定する
+
+ネームサーバがないので、各仮想マシンはお互いのホスト名を名前解決する事ができません。
+そのため、現時点ではそれぞれのIPアドレスを直接書く必要があります。
+これは非常に面倒です。
+
+なので、各仮想マシンのhostsファイルを以下のように編集しておきましょう:
+
+`/etc/hosts`:
+
+~~~
+127.0.0.1 localhost
+192.168.100.50 node0
+192.168.100.51 node1
+192.168.100.52 node2
+~~~
+
+これで、各マシンはお互いにホスト名を指定して通信できるようになります。
+
+### 仮想マシンの終了
+
+仮想マシンは、`vagrant halt`コマンドでまとめて終了できます:
+
+~~~
+$ vagrant halt
+~~~
+
+これで、Vagrantがすべての仮想マシンを完全に終了させてくれます。
+
+### 仮想マシンで行った変更を取り消す
+
+仮想マシンの中で行ったすべての変更を取り消したい場合は、単純に`vagrant destroy -f`というコマンドを実行すればOKです:
+
+~~~
+$ vagrant destroy -f
+$ vagrant up
+~~~
+
+これで、すべての変更を取り消して、仮想マシンをまっさらの状態で起動し直すことができます。
+この方法はインストールスクリプトの改修などの作業をする時に便利でしょう。
+
+### 付録: ホストマシンのRAMがそれほど多くない場合 {#less-size-memory}
+
+手持ちのコンピュータが十分なサイズのメモリを搭載していないとしても、諦める必要はありません。
+
+各仮想マシンに2GBのメモリが必要になるのは、[Rroonga][]のネイティブ拡張をビルドする必要があるからです。
+言い換えると、既にビルド済みのバイナリがあるのであれば、Droongaノードはそこまでのメモリがなくても動作します。
+
+ですので、以下のようにすると、各仮想マシンに順番にDroongaのサービスをインストールしていくことができます:
+
+ 1. `vagrant halt`ですべての仮想マシンを終了する。
+ 2. `virtualbox`でVirtualBoxのコンソールを開く。
+ 3. 1台の仮想マシンのプロパティを開き、メモリの大きさを2GB(2048MB)に設定し直す。
+ 4. VirtualBoxのコンソールからその仮想マシンを起動する。
+ 5. 仮想マシンにログインし、Droongaのサービスをインストールする。
+ 6. 仮想マシンを終了する。
+ 7. 仮想マシンのプロパティを開き、メモリの大きさを元に戻す。
+ 8. 3から7の手順を他の仮想マシンにも繰り返す。
+
+### 付録: 他のコンピュータから仮想マシン上のサービスに直接アクセスする
+
+ホストマシンが(リモートにある)サーバで、あなたが主に手元の別のPCを操作している状況において、仮想マシン内で動作しているHTTPサーバに手元のPCから直接接続したいと思うことがあるかもしれません。
+例えば、Google Chrome、Mozilla FirefoxのようなWebブラウザを使って管理ページを操作してみたい場合などです。
+
+このような場面では、OpenSSHのポートフォワーディング機能を使うと良いでしょう。
+以下のコマンドをホストマシン上で実行してください。
+
+~~~
+% ssh vagrant@192.168.100.50 \
+      -i ~/.vagrant.d/insecure_private_key \
+      -g \
+      -L 20041:localhost:10041
+~~~
+
+これにより、仮想マシン`node0`(`192.168.100.50`)上の`droonga-http-server`が提供している管理ページに、`http://(ホストマシンのIPアドレスまたはホスト名):20041/`というURLで実際にアクセスする事ができます。
+この時、ホストマシン上で動作しているOpenSSHのクライアントによって、`20041`番ポートに流れ込んできたパケットはすべて仮想マシン内の`10041`番ポートに転送されます。
+
+ * `vagrant@` というユーザ名と認証に使う秘密鍵を指定する必要があるので注意してください。
+ * ホストコンピュータ自身の外から来るリクエストを受け付けるためには、`-g`オプションの指定が必要です。
+
+## まとめ
+
+このチュートリアルでは、Droongaノード用に3台の仮想マシンを用意する手順を学びました。
+
+これで、[「使ってみる」のチュートリアル](../groonga/)を複数ノードで実践できます。
+
+  [Vagrant]: https://www.vagrantup.com/
+  [Vagrant Cloud]: https://vagrantcloud.com/
+  [VirtualBox]: https://www.virtualbox.org/
+  [Groonga]: http://groonga.org/
+  [Rroonga]: https://github.com/ranguba/rroonga
+  [Ubuntu]: http://www.ubuntu.com/
+  [Droonga]: https://droonga.org/
+  [Groonga]: http://groonga.org/

  Added: ja/tutorial/1.1.0/watch.md (+217 -0) 100644
===================================================================
--- /dev/null
+++ ja/tutorial/1.1.0/watch.md    2014-11-30 23:20:40 +0900 (abc7c11)
@@ -0,0 +1,217 @@
+---
+title: Droonga チュートリアル
+layout: ja
+---
+
+{% comment %}
+##############################################
+  THIS FILE IS AUTOMATICALLY GENERATED FROM
+  "_po/ja/tutorial/1.1.0/watch.po"
+  DO NOT EDIT THIS FILE MANUALLY!
+##############################################
+{% endcomment %}
+
+
+* TOC
+{:toc}
+
+## Real-time search
+
+Droonga supports streaming-style real-time search.
+
+### Update configurations of the Droonga engine
+
+Update your fluentd.conf and catalog.json, like:
+
+fluentd.conf:
+
+      <source>
+        type forward
+        port 24224
+      </source>
+      <match starbucks.message>
+        name localhost:24224/starbucks
+        type droonga
+      </match>
+    + <match droonga.message>
+    +   name localhost:24224/droonga
+    +   type droonga
+    + </match>
+      <match output.message>
+        type stdout
+      </match>
+
+catalog.json:
+
+      {
+        "effective_date": "2013-09-01T00:00:00Z",
+        "zones": [
+    +     "localhost:24224/droonga",
+          "localhost:24224/starbucks"
+        ],
+        "farms": {
+    +     "localhost:24224/droonga": {
+    +       "device": ".",
+    +       "capacity": 10
+    +     },
+          "localhost:24224/starbucks": {
+            "device": ".",
+            "capacity": 10
+          }
+        },
+        "datasets": {
+    +     "Watch": {
+    +       "workers": 2,
+    +       "plugins": ["search", "groonga", "add", "watch"],
+    +       "number_of_replicas": 1,
+    +       "number_of_partitions": 1,
+    +       "partition_key": "_key",
+    +       "date_range": "infinity",
+    +       "ring": {
+    +         "localhost:23041": {
+    +           "weight": 50,
+    +           "partitions": {
+    +             "2013-09-01": [
+    +               "localhost:24224/droonga.watch"
+    +             ]
+    +           }
+    +         }
+    +       }
+    +     },
+          "Starbucks": {
+            "workers": 0,
+            "plugins": ["search", "groonga", "add"],
+            "number_of_replicas": 2,
+            "number_of_partitions": 2,
+            "partition_key": "_key",
+            "date_range": "infinity",
+            "ring": {
+              "localhost:23041": {
+                "weight": 50,
+                "partitions": {
+                  "2013-09-01": [
+                    "localhost:24224/starbucks.000",
+                    "localhost:24224/starbucks.001"
+                  ]
+                }
+              },
+              "localhost:23042": {
+                "weight": 50,
+                "partitions": {
+                  "2013-09-01": [
+                    "localhost:24224/starbucks.002",
+                    "localhost:24224/starbucks.003"
+                  ]
+                }
+              }
+            }
+          }
+        },
+        "options": {
+          "plugins": []
+        }
+      }
+
+### Add a streaming API to the protocol adapter
+
+
+Add a streaming API to the protocol adapter, like:
+
+application.js:
+
+    var express = require('express'),
+        droonga = require('express-droonga');
+    
+    var application = express();
+    var server = require('http').createServer(application);
+    server.listen(3000); // the port to communicate with clients
+    
+    //============== INSERTED ==============
+    var streaming = {
+      'streaming': new droonga.command.HTTPStreaming({
+        dataset: 'Watch',
+        path: '/watch',
+        method: 'GET',
+        subscription: 'watch.subscribe',
+        unsubscription: 'watch.unsubscribe',
+        notification: 'watch.notification',
+        createSubscription: function(request) {
+          return {
+            condition: request.query.query
+          };
+        }
+      })
+    };
+    //============= /INSERTED ==============
+    
+    application.droonga({
+      prefix: '/droonga',
+      tag: 'starbucks',
+      defaultDataset: 'Starbucks',
+      server: server, // this is required to initialize Socket.IO API!
+      plugins: [
+        droonga.API_REST,
+        droonga.API_SOCKET_IO,
+        droonga.API_GROONGA,
+        droonga.API_DROONGA
+    //============== INSERTED ==============
+        ,streaming
+    //============= /INSERTED ==============
+      ]
+    });
+
+    application.get('/', function(req, res) {
+      res.sendfile(__dirname + '/index.html');
+    });
+
+### Prepare feeds
+
+Prepare "feed"s like:
+
+feeds.jsons:
+
+    {"id":"feed:0","dataset":"Watch","type":"watch.feed","body":{"targets":{"key":"old place 0"}}}
+    {"id":"feed:1","dataset":"Watch","type":"watch.feed","body":{"targets":{"key":"new place 0"}}}
+    {"id":"feed:2","dataset":"Watch","type":"watch.feed","body":{"targets":{"key":"old place 1"}}}
+    {"id":"feed:3","dataset":"Watch","type":"watch.feed","body":{"targets":{"key":"new place 1"}}}
+    {"id":"feed:4","dataset":"Watch","type":"watch.feed","body":{"targets":{"key":"old place 2"}}}
+    {"id":"feed:5","dataset":"Watch","type":"watch.feed","body":{"targets":{"key":"new place 2"}}}
+
+### Try it!
+
+First, restart the servers in their respective consoles.
+
+The engine:
+
+    # fluentd --config fluentd.conf
+
+The protocol adapter:
+
+    # nodejs application.js
+
+Next, connect to the streaming API via curl:
+
+    # curl "http://localhost:3000/droonga/watch?query=new"
+
+Then the client starts to receive streamed results.
+
+Next, open a new console and send "feed"s to the engine like:
+
+    # fluent-cat droonga.message < feeds.jsons
+
+Then the client receives three results "new place 0", "new place 1", and "new place 2" like:
+
+    {"targets":{"key":"new place 0"}}
+    {"targets":{"key":"new place 1"}}
+    {"targets":{"key":"new place 2"}}
+
+They are search results for the query "new", given as a query parameter of the streaming API.
+
+Results can appear in a different order, like:
+
+    {"targets":{"key":"new place 1"}}
+    {"targets":{"key":"new place 0"}}
+    {"targets":{"key":"new place 2"}}
+
+because "feed"s are processed in multiple workers asynchronously.
+

  Added: reference/1.1.0/catalog/index.md (+13 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/catalog/index.md    2014-11-30 23:20:40 +0900 (1f11907)
@@ -0,0 +1,13 @@
+---
+title: Catalog
+layout: en
+---
+
+A Droonga network consists of several resources. You need to describe
+them in the **catalog**. All the nodes in the network share the same
+catalog.
+
+Catalog specification is versioned. Here are available versions:
+
+ * [version 2](version2/)
+ * [version 1](version1/): (It is deprecated since 1.0.0.)

  Added: reference/1.1.0/catalog/version1/index.md (+320 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/catalog/version1/index.md    2014-11-30 23:20:40 +0900 (f2af226)
@@ -0,0 +1,320 @@
+---
+title: Catalog
+layout: en
+---
+
+A Droonga network consists of several resources. You need to describe
+them in the **catalog**. All the nodes in the network share the same
+catalog.
+
+This document describes the catalog.
+
+ * TOC
+{:toc}
+
+## How to manage
+
+So far, you need to write the catalog and share it with all the nodes
+manually.
+
+Some utility programs will generate the catalog in the near future.
+Furthermore, the Droonga network will maintain and share the catalog
+automatically.
+
+## Glossary
+
+This section describes terms in catalog.
+
+### Catalog
+
+Catalog is a series of data which represents the resources in the
+network.
+
+### Zone
+
+Zone is a set of farms. Farms in a zone are expected to be close to each
+other, for example in the same host, in the same switch, or in the same network.
+
+### Farm
+
+A farm is a Droonga Engine instance. Droonga Engine is implemented as
+a [Fluentd][] plugin, fluent-plugin-droonga.
+
+A `fluentd` process can have multiple Droonga Engines. If you add one
+or more `match` entries with type `droonga` into `fluentd.conf`, a
+`fluentd` process instantiates one or more Droonga Engines.
+
+A farm has its own workers and a job queue. A farm pushes requests to its
+job queue, and workers pull requests from the job queue.
+
+### Dataset
+
+Dataset is a set of logical tables. A logical table must belong to
+only one dataset.
+
+Each dataset must have a unique name in the same Droonga network.
+
+### Logical table
+
+Logical table consists of one or more partitioned physical tables.
+Logical table doesn't have physical records. It returns physical
+records from physical tables.
+
+You can customize how a logical table is partitioned into one or more
+physical tables. For example, you can customize the partition key, the
+number of partitions and so on.
+
+### Physical table
+
+Physical table is a table in Groonga database. It stores physical
+records to the table.
+
+### Ring
+
+Ring is a series of partition sets. Dataset must have one
+ring. Dataset creates logical tables on the ring.
+
+Droonga Engine replicates each record in a logical table into one or
+more partition sets.
+
+### Partition set
+
+Partition set is a set of partitions. A partition set stores all
+records in all logical tables in the same Droonga network. In other
+words, a dataset is partitioned within a partition set.
+
+A partition set is a replication of other partition set.
+
+Droonga Engine may support partitioning in one or more partition
+sets in the future. It will be useful to use different partition
+size for old data and new data. Normally, old data are smaller and
+new data are bigger. It is reasonable that you use larger partition
+size for bigger data.
+
+### Partition
+
+Partition is a Groonga database. It has zero or more physical
+tables.
+
+### Plugin
+
+Droonga Engine can be extended by writing plugin scripts.
+In most cases, a series of plugins work cooperatively to
+achieve required behaviors.
+So, plugins are organized by behaviors.
+Each behavior can be attached to datasets and/or tables by
+adding "plugins" section to the corresponding entry in the catalog.
+
+More than one plugin can be assigned in a "plugins" section as an array.
+The order in the array controls the execution order of plugins
+when adapting messages.
+When adapting an incoming message, plugins are applied in forward order
+whereas they are applied in reverse order when adapting an outgoing message.
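+
+For instance, here is a minimal sketch (with illustrative plugin names taken from the examples in this document, not Droonga's actual dispatching code) of the resulting adaption order:
+
+~~~ruby
+# How the order of the "plugins" array could be applied during adaption.
+plugins = ["delete-stores", "crud", "search"]
+
+incoming_adaption_order = plugins         # incoming messages: forward order
+outgoing_adaption_order = plugins.reverse # outgoing messages: reverse order
+
+p incoming_adaption_order # => ["delete-stores", "crud", "search"]
+p outgoing_adaption_order # => ["search", "crud", "delete-stores"]
+~~~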
+
+## Example
+
+Consider the following case:
+
+ * There are two farms.
+ * All farms (Droonga Engine instances) work on the same fluentd.
+ * Each farm has two partitions.
+ * There are two replicas.
+ * There are two partitions for each table.
+
+Catalog is written as a JSON file. Its file name is `catalog.json`.
+
+Here is a `catalog.json` for the above case:
+
+~~~json
+{
+  "version": 1,
+  "effective_date": "2013-06-05T00:05:51Z",
+  "zones": ["localhost:23003/farm0", "localhost:23003/farm1"],
+  "farms": {
+    "localhost:23003/farm0": {
+      "device": "disk0",
+      "capacity": 1024
+    },
+    "localhost:23003/farm1": {
+      "device": "disk1",
+      "capacity": 1024
+    }
+  },
+  "datasets": {
+    "Wiki": {
+      "workers": 4,
+      "plugins": ["groonga", "crud", "search"],
+      "number_of_replicas": 2,
+      "number_of_partitions": 2,
+      "partition_key": "_key",
+      "date_range": "infinity",
+      "ring": {
+        "localhost:23004": {
+          "weight": 10,
+          "partitions": {
+            "2013-07-24": [
+              "localhost:23003/farm0.000",
+              "localhost:23003/farm1.000"
+            ]
+          }
+        },
+        "localhost:23005": {
+          "weight": 10,
+          "partitions": {
+            "2013-07-24": [
+              "localhost:23003/farm1.001",
+              "localhost:23003/farm0.001"
+            ]
+          }
+        }
+      }
+    }
+  }
+}
+~~~
+
+## Parameters
+
+Here are descriptions about parameters in `catalog.json`.
+
+### `version` {#version}
+
+It is a format version of the catalog file.
+
+Droonga Engine may change the `catalog.json` format in the
+future. With this version information, Droonga Engine can provide an
+automatic format update feature.
+
+The value must be `1`.
+
+This is a required parameter.
+
+Example:
+
+~~~json
+{
+  "version": 1
+}
+~~~
+
+### `effective_date`
+
+It is a date string representing the day the catalog becomes
+effective.
+
+The date string format must be [W3C-DTF][].
+
+This is a required parameter.
+
+Note: fluent-plugin-droonga 0.8.0 doesn't use this value yet.
+
+Example:
+
+~~~json
+{
+  "effective_date": "2013-11-29T11:29:29Z"
+}
+~~~
+
+### `zones`
+
+`Zones` is an array to express proximities between farms.
+Farms are grouped by a zone, and zones can be grouped by another zone recursively.
+Zones make a single tree structure, expressed by nested arrays.
+Farms in the same branch are regarded as closer to each other than to other farms.
+
+e.g.
+
+When the value of `zones` is as follows,
+
+```
+[["A", ["B", "C"]], "D"]
+```
+
+it expresses the following tree.
+
+       /\
+      /\ D
+     A /\
+      B  C
+
+This tree means that the farms "B" and "C" are closer to each other than to "A" or "D".
+You should make elements grouped in a `zones` branch close to each other, for example in
+the same host, in the same switch, or in the same network.
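+
+Here is a minimal sketch (hypothetical, not part of Droonga) of reading such proximity from the nested array: the deeper the branch two farms share, the closer they are.
+
+~~~ruby
+# Proximity sketch: the length of the shared branch path of two farms.
+zones = [["A", ["B", "C"]], "D"]
+
+# Returns the list of branch indexes leading to the given farm, or nil.
+def path_to(farm, tree, path = [])
+  tree.each_with_index do |node, index|
+    if node == farm
+      return path + [index]
+    elsif node.is_a?(Array)
+      found = path_to(farm, node, path + [index])
+      return found if found
+    end
+  end
+  nil
+end
+
+# Proximity = length of the common prefix of the two paths.
+def proximity(farm1, farm2, tree)
+  path1 = path_to(farm1, tree)
+  path2 = path_to(farm2, tree)
+  path1.zip(path2).take_while { |index1, index2| index1 == index2 }.size
+end
+
+p proximity("B", "C", zones) # => 2 (same deep branch: close)
+p proximity("A", "D", zones) # => 0 (different top-level branches: far)
+~~~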
+
+This is an optional parameter.
+
+Note: fluent-plugin-droonga 0.8.0 doesn't use this value yet.
+
+Example:
+
+~~~json
+{
+  "zones": [
+    ["localhost:23003/farm0",
+     "localhost:23003/farm1"],
+    ["localhost:23004/farm0",
+     "localhost:23004/farm1"]
+  ]
+}
+~~~
+
+*TODO: Discuss about the call of this parameter. This seems completely equal to the list of keys of `farms`.*
+
+### `farms`
+
+It is a collection of Droonga Engine instances.
+
+*TODO: Improve me. For example, we have to describe relations of nested farms, ex. `children`.*
+
+**Farms** correspond with fluent-plugin-droonga instances. A fluentd process may have multiple **farms** if more than one **match** entry with type **droonga** appears in the "fluentd.conf".
+Each **farm** has its own job queue.
+Each **farm** can attach to a data partition which is a part of a **dataset**.
+
+This is a required parameter.
+
+Example:
+
+~~~json
+{
+  "farms": {
+    "localhost:23003/farm0": {
+      "device": "/disk0",
+      "capacity": 1024
+    },
+    "localhost:23003/farm1": {
+      "device": "/disk1",
+      "capacity": 1024
+    }
+  }
+}
+~~~
+
+### `datasets`
+
+A **dataset** is a set of **tables** which comprise a single logical **table** virtually.
+Each **dataset** must have a unique name in the network.
+
+### `ring`
+
+`ring` is a series of partitions which comprise a dataset. The `number_of_replicas`, `number_of_partitions` and **time-slice** factors affect the number of partitions in a `ring`. For example, in the catalog above, 2 replicas x 2 partitions yield the 4 partition entries listed in the `ring`.
+
+### `workers`
+
+`workers` is an integer number which specifies the number of worker processes to deal with the dataset.
+If `0` is specified, no worker is forked and all operations are done in the master process.
+
+### `number_of_partitions`
+
+`number_of_partitions` is an integer which represents the number of partitions the dataset is divided into by the hash function. The hash function, which determines in which partition each record resides, is compatible with memcached.
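+
+As a simplified illustration (assuming a plain CRC32-modulo distribution; the actual function is the memcached-compatible one mentioned above), the choice of a partition could look like this:
+
+~~~ruby
+# Simplified sketch: pick a partition index from the partition key value.
+require "zlib"
+
+number_of_partitions = 2
+partition_key_value  = "Tokyo store"
+partition_index      = Zlib.crc32(partition_key_value) % number_of_partitions
+puts partition_index # => 0 or 1: the partition this record belongs to
+~~~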
+
+### `date_range`
+
+`date_range` determines when to split the dataset. If the string "infinity" is assigned, the dataset is never split by the time factor.
+
+### `number_of_replicas`
+
+`number_of_replicas` represents the number of replicas of the dataset maintained in the network.
+
+  [Fluentd]: http://fluentd.org/
+  [W3C-DTF]: http://www.w3.org/TR/NOTE-datetime "Date and Time Formats"

  Added: reference/1.1.0/catalog/version2/index.md (+860 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/catalog/version2/index.md    2014-11-30 23:20:40 +0900 (50f52be)
@@ -0,0 +1,860 @@
+---
+title: Catalog
+layout: en
+---
+
+* TOC
+{:toc}
+
+## Abstract {#abstract}
+
+`Catalog` is JSON data which manages the configuration of a Droonga cluster.
+A Droonga cluster consists of one or more `datasets`, and a `dataset` consists of further components. They all must be explicitly described in a `catalog` and shared with all the hosts in the cluster.
+
+## Usage {#usage}
+
+This [`version`](#parameter-version) of `catalog` will be available from Droonga 1.0.0.
+
+## Syntax {#syntax}
+
+    {
+      "version": <Version number>,
+      "effectiveDate": "<Effective date>",
+      "datasets": {
+        "<Name of the dataset 1>": {
+          "nWorkers": <Number of workers>,
+          "plugins": [
+            "Name of the plugin 1",
+            ...
+          ],
+          "schema": {
+            "<Name of the table 1>": {
+              "type"             : <"Array", "Hash", "PatriciaTrie" or "DoubleArrayTrie">
+              "keyType"          : "<Type of the primary key>",
+              "tokenizer"        : "<Tokenizer>",
+              "normalizer"       : "<Normalizer>",
+              "columns" : {
+                "<Name of the column 1>": {
+                  "type"         : <"Scalar", "Vector" or "Index">,
+                  "valueType"    : "<Type of the value>",
+                  "vectorOptions": {
+                    "weight"     : <Weight>,
+                  },
+                  "indexOptions" : {
+                    "section"    : <Section>,
+                    "weight"     : <Weight>,
+                    "position"   : <Position>,
+                    "sources"    : [
+                      "<Name of a column to be indexed>",
+                      ...
+                    ]
+                  }
+                },
+                "<Name of the column 2>": { ... },
+                ...
+              }
+            },
+            "<Name of the table 2>": { ... },
+            ...
+          },
+          "fact": "<Name of the fact table>",
+          "replicas": [
+            {
+              "dimension": "<Name of the dimension column>",
+              "slicer": "<Name of the slicer function>",
+              "slices": [
+                {
+                  "label": "<Label of the slice>",
+                  "volume": {
+                    "address": "<Address string of the volume>"
+                  }
+                },
+                ...
+              ]
+            },
+            ...
+          ]
+        },
+        "<Name of the dataset 2>": { ... },
+        ...
+      }
+    }
+
+## Details {#details}
+
+### Catalog definition {#catalog}
+
+Value
+: An object with the following key/value pairs.
+
+#### Parameters
+
+##### `version` {#parameter-version}
+
+Abstract
+: Version number of the catalog file.
+
+Value
+: `2`. (The specification described in this page is valid only when this value is `2`.)
+
+Default value
+: None. This is a required parameter.
+
+Inheritable
+: False.
+
+##### `effectiveDate` {#parameter-effective_date}
+
+Abstract
+: The time when this catalog becomes effective.
+
+Value
+: A local time string formatted in the [W3C-DTF](http://www.w3.org/TR/NOTE-datetime "Date and Time Formats"), with the time zone.
+
+Default value
+: None. This is a required parameter.
+
+Inheritable
+: False.
+
+##### `datasets` {#parameter-datasets}
+
+Abstract
+: Definition of datasets.
+
+Value
+: An object keyed by the name of the dataset with value the [`dataset` definition](#dataset).
+
+Default value
+: None. This is a required parameter.
+
+Inheritable
+: False.
+
+##### `nWorkers` {#parameter-n_workers}
+
+Abstract
+: The number of worker processes spawned for each database instance.
+
+Value
+: An integer value.
+
+Default value
+: 0 (No worker. All operations are done in the master process)
+
+Inheritable
+: True. Overridable in `dataset` and `volume` definition.
+
+
+#### Example
+
+A version 2 catalog effective after `2013-09-01T00:00:00Z`, with no datasets:
+
+~~~
+{
+  "version": 2,
+  "effectiveDate": "2013-09-01T00:00:00Z",
+  "datasets": {
+  }
+}
+~~~
+
+### Dataset definition {#dataset}
+
+Value
+: An object with the following key/value pairs.
+
+#### Parameters
+
+##### `plugins` {#parameter-plugins}
+
+Abstract
+: Name strings of the plugins enabled for the dataset.
+
+Value
+: An array of strings.
+
+Default value
+: None. This is a required parameter.
+
+Inheritable
+: True. Overridable in `dataset` and `volume` definition.
+
+##### `schema` {#parameter-schema}
+
+Abstract
+: Definition of tables and their columns.
+
+Value
+: An object keyed by the name of the table with value the [`table` definition](#table).
+
+Default value
+: None. This is a required parameter.
+
+Inheritable
+: True. Overridable in `dataset` and `volume` definition.
+
+##### `fact` {#parameter-fact}
+
+Abstract
+: The name of the fact table. When a `dataset` is stored as more than one `slice`, one [fact table](http://en.wikipedia.org/wiki/Fact_table) must be selected from tables defined in [`schema`](#parameter-schema) parameter.
+
+Value
+: A string.
+
+Default value
+: None.
+
+Inheritable
+: True. Overridable in `dataset` and `volume` definition.
+
+##### `replicas` {#parameter-replicas}
+
+Abstract
+: A collection of volumes which are the copies of each other.
+
+Value
+: An array of [`volume` definitions](#volume).
+
+Default value
+: None. This is a required parameter.
+
+Inheritable
+: False.
+
+#### Example
+
+A dataset with 4 workers per database instance, with plugins `groonga`, `crud` and `search`:
+
+~~~
+{
+  "nWorkers": 4,
+  "plugins": ["groonga", "crud", "search"],
+  "schema": {
+  },
+  "replicas": [
+  ]
+}
+~~~
+
+### Table definition {#table}
+
+Value
+: An object with the following key/value pairs.
+
+#### Parameters
+
+##### `type` {#parameter-table-type}
+
+Abstract
+: Specifies which data structure is used for managing keys of the table.
+
+Value
+: Any of the following values.
+
+* `"Array"`: for tables which have no keys.
+* `"Hash"`: for hash tables.
+* `"PatriciaTrie"`: for patricia trie tables.
+* `"DoubleArrayTrie"`: for double array trie tables.
+
+Default value
+: `"Hash"`
+
+Inheritable
+: False.
+
+##### `keyType` {#parameter-keyType}
+
+Abstract
+: Data type of the key of the table. Mustn't be assigned when the `type` is `"Array"`.
+
+Value
+: Any of the following data types.
+
+* `"Integer"`       : 64bit signed integer.
+* `"Float"`         : 64bit floating-point number.
+* `"Time"`          : Time value with microseconds resolution.
+* `"ShortText"`     : Text value up to 4095 bytes length.
+* `"TokyoGeoPoint"` : Tokyo Datum based geometric point value.
+* `"WGS84GeoPoint"` : [WGS84](http://en.wikipedia.org/wiki/World_Geodetic_System) based geometric point value.
+
+Default value
+: None. Mandatory for tables with keys.
+
+Inheritable
+: False.
+
+##### `tokenizer` {#parameter-tokenizer}
+
+Abstract
+: Specifies the type of tokenizer used for splitting each text value, when the table is used as a lexicon. Only available when the `keyType` is `"ShortText"`.
+
+Value
+: Any of the following tokenizer names.
+
+* `"TokenDelimit"`
+* `"TokenUnigram"`
+* `"TokenBigram"`
+* `"TokenTrigram"`
+* `"TokenBigramSplitSymbol"`
+* `"TokenBigramSplitSymbolAlpha"`
+* `"TokenBigramSplitSymbolAlphaDigit"`
+* `"TokenBigramIgnoreBlank"`
+* `"TokenBigramIgnoreBlankSplitSymbol"`
+* `"TokenBigramIgnoreBlankSplitSymbolAlpha"`
+* `"TokenBigramIgnoreBlankSplitSymbolAlphaDigit"`
+* `"TokenDelimitNull"`
+
+Default value
+: None.
+
+Inheritable
+: False.
+
+##### `normalizer` {#parameter-normalizer}
+
+Abstract
+: Specifies the type of normalizer which normalizes and restricts the key values. Only available when the `keyType` is `"ShortText"`.
+
+Value
+: Any of the following normalizer names.
+
+* `"NormalizerAuto"`
+* `"NormalizerNFKC51"`
+
+Default value
+: None.
+
+Inheritable
+: False.
+
+##### `columns` {#parameter-columns}
+
+Abstract
+: Column definition for the table.
+
+Value
+: An object keyed by the name of the column with value the [`column` definition](#column).
+
+Default value
+: None.
+
+Inheritable
+: False.
+
+#### Examples
+
+##### Example 1: Hash table
+
+A `Hash` table whose key is `ShortText` type, with no columns:
+
+~~~
+{
+  "type": "Hash",
+  "keyType": "ShortText",
+  "columns": {
+  }
+}
+~~~
+
+##### Example 2: PatriciaTrie table
+
+A `PatriciaTrie` table with `TokenBigram` tokenizer and `NormalizerAuto` normalizer, with no columns:
+
+~~~
+{
+  "type": "PatriciaTrie",
+  "keyType": "ShortText",
+  "tokenizer": "TokenBigram",
+  "normalizer": "NormalizerAuto",
+  "columns": {
+  }
+}
+~~~
+
+### Column definition {#column}
+
+Value
+
+: An object with the following key/value pairs.
+
+#### Parameters
+
+##### `type` {#parameter-column-type}
+
+Abstract
+: Specifies the quantity of data stored as each column value.
+
+Value
+: Any of the followings.
+
+* `"Scalar"`: A single value.
+* `"Vector"`: A list of values.
+* `"Index"` : A set of unique values with additional properties respectively. Properties can be specified in [`indexOptions`](#parameter-indexOptions).
+
+Default value
+: `"Scalar"`
+
+Inheritable
+: False.
+
+##### `valueType` {#parameter-valueType}
+
+Abstract
+: Data type of the column value.
+
+Value
+: Any of the following data types or the name of another table defined in the same dataset. When a table name is assigned, the column acts as a foreign key which references the table.
+
+* `"Bool"`          : `true` or `false`.
+* `"Integer"`       : 64bit signed integer.
+* `"Float"`         : 64bit floating-point number.
+* `"Time"`          : Time value with microseconds resolution.
+* `"ShortText"`     : Text value up to 4,095 bytes length.
+* `"Text"`          : Text value up to 2,147,483,647 bytes length.
+* `"TokyoGeoPoint"` : Tokyo Datum based geometric point value.
+* `"WGS84GeoPoint"` : [WGS84](http://en.wikipedia.org/wiki/World_Geodetic_System) based geometric point value.
+
+Default value
+: None. This is a required parameter.
+
+Inheritable
+: False.
+
+##### `vectorOptions` {#parameter-vectorOptions}
+
+Abstract
+: Specifies the optional properties of a "Vector" column.
+
+Value
+: An object which is a [`vectorOptions` definition](#vectorOptions)
+
+Default value
+: `{}` (Void object).
+
+Inheritable
+: False.
+
+##### `indexOptions` {#parameter-indexOptions}
+
+Abstract
+: Specifies the optional properties of an "Index" column.
+
+Value
+: An object which is an [`indexOptions` definition](#indexOptions)
+
+Default value
+: `{}` (Void object).
+
+Inheritable
+: False.
+
+#### Examples
+
+##### Example 1: Scalar column
+
+A scalar column to store `ShortText` values:
+
+~~~
+{
+  "type": "Scalar",
+  "valueType": "ShortText"
+}
+~~~
+
+##### Example 2: Vector column
+
+A vector column to store `ShortText` values with weight:
+
+~~~
+{
+  "type": "Scalar",
+  "valueType": "ShortText",
+  "vectorOptions": {
+    "weight": true
+  }
+}
+~~~
+
+##### Example 3: Index column
+
+A column to index `address` column on `Store` table:
+
+~~~
+{
+  "type": "Index",
+  "valueType": "Store",
+  "indexOptions": {
+    "sources": [
+      "address"
+    ]
+  }
+}
+~~~
+
+### vectorOptions definition {#vectorOptions}
+
+Value
+: An object with the following key/value pairs.
+
+#### Parameters
+
+##### `weight` {#parameter-vectorOptions-weight}
+
+Abstract
+: Specifies whether the vector column stores the weight data or not. Weight data is used for indicating the importance of the value.
+
+Value
+: A boolean value (`true` or `false`).
+
+Default value
+: `false`.
+
+Inheritable
+: False.
+
+#### Example
+
+Store the weight data.
+
+~~~
+{
+  "weight": true
+}
+~~~
+
+### indexOptions definition {#indexOptions}
+
+Value
+: An object with the following key/value pairs.
+
+#### Parameters
+
+##### `section` {#parameter-indexOptions-section}
+
+Abstract
+: Specifies whether the index column stores the section data or not. Section data is typically used for distinguishing in which part of the sources the value appears.
+
+Value
+: A boolean value (`true` or `false`).
+
+Default value
+: `false`.
+
+Inheritable
+: False.
+
+##### `weight` {#parameter-indexOptions-weight}
+
+Abstract
+: Specifies whether the index column stores the weight data or not. Weight data is used for indicating the importance of the value in the sources.
+
+Value
+: A boolean value (`true` or `false`).
+
+Default value
+: `false`.
+
+Inheritable
+: False.
+
+##### `position` {#parameter-indexOptions-position}
+
+Abstract
+: Specifies whether the index column stores the position data or not. Position data is used for specifying the position where the value appears in the sources. It is indispensable for fast and accurate phrase-search.
+
+Value
+: A boolean value (`true` or `false`).
+
+Default value
+: `false`.
+
+Inheritable
+: False.
+
+##### `sources` {#parameter-indexOptions-sources}
+
+Abstract
+: Makes the column an inverted index of columns in the table it references.
+
+Value
+: An array of column names of the table assigned as [`valueType`](#parameter-valueType).
+
+Default value
+: None.
+
+Inheritable
+: False.
+
+#### Example
+
+Store the section data, the weight data and the position data.
+Index `name` and `address` on the referencing table.
+
+~~~
+{
+  "section": true,
+  "weight": true,
+  "position": true
+  "sources": [
+    "name",
+    "address"
+  ]
+}
+~~~
+
+### Volume definition {#volume}
+
+Abstract
+: A unit to compose a dataset. A dataset consists of one or more volumes. A volume consists of either a single database instance or a collection of `slices`. When a volume consists of a single database instance, the `address` parameter must be assigned and the other parameters must not be assigned. Otherwise, `dimension`, `slicer` and `slices` are required and `address` must not be assigned.
+
+Value
+: An object with the following key/value pairs.
+
+#### Parameters
+
+##### `address` {#parameter-address}
+
+Abstract
+: Specifies the location of the database instance.
+
+Value
+: A string in the following format.
+
+      ${host_name}:${port_number}/${tag}.${name}
+
+  * `host_name`: The name of host that has the database instance.
+  * `port_number`: The port number for the database instance.
+  * `tag`: The tag of the database instance. The tag name can't include `.`. You can use multiple tags for one host name and port number pair.
+  * `name`: The name of the database instance. You can use multiple names for one host name, port number and tag triplet.
+
+Default value
+: None.
+
+Inheritable
+: False.
+
+##### `dimension` {#parameter-dimension}
+
+Abstract
+: Specifies the dimension used to slice the records in the fact table. Either `_key` or a scalar type column can be selected from the [`columns`](#parameter-columns) parameter of the fact table. See [dimension](http://en.wikipedia.org/wiki/Dimension_%28data_warehouse%29).
+
+Value
+: A string.
+
+Default value
+: `"_key"`
+
+Inheritable
+: True. Overridable in `dataset` and `volume` definition.
+
+##### `slicer` {#parameter-slicer}
+
+Abstract
+: Function to slice the value of dimension column.
+
+Value
+: Name of slicer function.
+
+Default value
+: `"hash"`
+
+Inheritable
+: True. Overridable in `dataset` and `volume` definition.
+
+In order to define a volume which consists of a collection of `slices`,
+you must decide how records are sliced into slices.
+
+This is defined by the slicer function specified as `slicer` and
+the column (or key) specified as `dimension`,
+which is the input for the slicer function.
+
+Slicers are categorized into the following three types (a short sketch follows the list):
+
+Ratio-scaled
+: *Ratio-scaled slicers* slice datapoints in the specified ratio,
+  e.g. hash function of _key.
+  Slicers of this type are:
+  
+  * `hash`
+
+Ordinal-scaled
+: *Ordinal-scaled slicers* slice datapoints with ordinal values;
+  the values have some ranking, e.g. time, integer,
+  element of `{High, Middle, Low}`.
+  Slicers of this type are:
+  
+  * (not implemented yet)
+
+Nominal-scaled
+: *Nominal-scaled slicers* slice datapoints with nominal values;
+  the values denote categories, which have no order,
+  e.g. country, zip code, color.
+  Slicers of this type are:
+  
+  * (not implemented yet)
+
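+Here is a minimal sketch (hypothetical, not Droonga's actual implementation) of a ratio-scaled slicer: records are spread over slices in proportion to their weights.
+
+~~~ruby
+# Ratio-scaled slicer sketch: choose a slice for a dimension value by weight.
+require "zlib"
+
+slices = [
+  { "label" => "slice0", "weight" => 1 },
+  { "label" => "slice1", "weight" => 1 },
+  { "label" => "slice2", "weight" => 2 }, # receives roughly twice the records
+]
+
+def slice_for(dimension_value, slices)
+  total_weight = slices.inject(0) { |total, slice| total + slice["weight"] }
+  point = Zlib.crc32(dimension_value) % total_weight
+  slices.each do |slice|
+    return slice["label"] if point < slice["weight"]
+    point -= slice["weight"]
+  end
+end
+
+puts slice_for("store-broadway", slices) # e.g. "slice2"
+~~~
+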
+##### `slices` {#parameter-slices}
+
+Abstract
+: Definition of slices which store the contents of the data.
+
+Value
+: An array of [`slice` definitions](#slice).
+
+Default value
+: None.
+
+Inheritable
+: False.
+
+#### Examples
+
+##### Example 1: Single instance
+
+A volume at "localhost:24224/volume.000":
+
+~~~
+{
+  "address": "localhost:24224/volume.000"
+}
+~~~
+
+##### Example 2: Slices
+
+A volume that consists of three slices; records are distributed by `hash`,
+a ratio-scaled slicer function, applied to `_key`.
+
+~~~
+{
+  "dimension": "_key",
+  "slicer": "hash",
+  "slices": [
+    {
+      "volume": {
+        "address": "localhost:24224/volume.000"
+      }
+    },
+    {
+      "volume": {
+        "address": "localhost:24224/volume.001"
+      }
+    },
+    {
+      "volume": {
+        "address": "localhost:24224/volume.002"
+      }
+    }
+  ]
+}
+~~~
+
+### Slice definition {#slice}
+
+Abstract
+: Definition of each slice. Specifies the range of sliced data and the volume to store the data.
+
+Value
+: An object with the following key/value pairs.
+
+#### Parameters
+
+##### `weight` {#parameter-slice-weight}
+
+Abstract
+: Specifies the share in the slices. Only available when the `slicer` is ratio-scaled.
+
+Value
+: A numeric value.
+
+Default value
+: `1`.
+
+Inheritable
+: False.
+
+##### `label` {#parameter-label}
+
+Abstract
+: Specifies the concrete value that slicer may return. Only available when the slicer is nominal-scaled.
+
+Value
+: A value of the dimension column data type. When the value is not provided, this slice is regarded as *else*; it is matched only if all other labels are not matched. Therefore, only one slice without `label` is allowed in the slices.
+
+Default value
+: None.
+
+Inheritable
+: False.
+
+##### `boundary` {#parameter-boundary}
+
+Abstract
+: Specifies the concrete value that can compare with `slicer`'s return value. Only available when the `slicer` is ordinal-scaled.
+
+Value
+: A value of the dimension column data type. When the value is not provided, this slice is regarded as *else*; this means the slice is open-ended. Therefore, only one slice without `boundary` is allowed in the slices.
+
+Default value
+: None.
+
+Inheritable
+: False.
+
+##### `volume` {#parameter-volume}
+
+Abstract
+: A volume to store the data which corresponds to the slice.
+
+Value
+
+: An object which is a [`volume` definition](#volume)
+
+Default value
+: None.
+
+Inheritable
+: False.
+
+#### Examples
+
+##### Example 1: Ratio-scaled
+
+Slice for a ratio-scaled slicer, with the weight `1`:
+
+~~~
+{
+  "weight": 1,
+  "volume": {
+  }
+}
+~~~
+
+##### Example 2: Nominal-scaled
+
+Slice for a nominal-scaled slicer, with the label `"1"`:
+
+~~~
+{
+  "label": "1",
+  "volume": {
+  }
+}
+~~~
+
+##### Example 3: Ordinal-scaled
+
+Slice for an ordinal-scaled slicer, with the boundary `100`:
+
+~~~
+{
+  "boundary": 100,
+  "volume": {
+  }
+}
+~~~
+
+## Realworld example
+
+See the catalog in the [basic tutorial].
+
+  [basic tutorial]: ../../../tutorial/basic

  Added: reference/1.1.0/commands/add/index.md (+253 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/commands/add/index.md    2014-11-30 23:20:40 +0900 (08ec054)
@@ -0,0 +1,253 @@
+---
+title: add
+layout: en
+---
+
+* TOC
+{:toc}
+
+## Abstract {#abstract}
+
+The `add` command adds a new record to the specified table. If the table has a primary key and there is an existing record with the specified key, the column values of the existing record are updated with the given values.
+
+## API types {#api-types}
+
+### HTTP {#api-types-http}
+
+Request endpoint
+: `(Document Root)/droonga/add`
+
+Request method
+: `POST`
+
+Request URL parameters
+: Nothing.
+
+Request body
+: A hash of [parameters](#parameters).
+
+Response body
+: A [response message](#response).
+
+### REST {#api-types-rest}
+
+Not supported.
+
+### Fluentd {#api-types-fluentd}
+
+Style
+: Request-Response. One response message is always returned per one request.
+
+`type` of the request
+: `add`
+
+`body` of the request
+: A hash of [parameters](#parameters).
+
+`type` of the response
+: `add.result`
+
+## Parameter syntax {#syntax}
+
+If the table has a primary key column:
+
+    {
+      "table"  : "<Name of the table>",
+      "key"    : "<The primary key of the record>",
+      "values" : {
+        "<Name of the column 1>" : <value 1>,
+        "<Name of the column 2>" : <value 2>,
+        ...
+      }
+    }
+
+If the table has no primary key column:
+
+    {
+      "table"  : "<Name of the table>",
+      "values" : {
+        "<Name of the column 1>" : <value 1>,
+        "<Name of the column 2>" : <value 2>,
+        ...
+      }
+    }
+
+## Usage {#usage}
+
+This section describes how to use the `add` command through a typical usage scenario with the following two tables:
+
+Person table (without primary key):
+
+|name|job (referring to the Job table)|
+|----|--------------------------------|
+|Alice Arnold|announcer|
+|Alice Cooper|musician|
+
+Job table (with primary key)
+
+|_key|label|
+|----|-----|
+|announcer|announcer|
+|musician|musician|
+
+
+### Adding a new record to a table without primary key {#adding-record-to-table-without-key}
+
+Specify only `table` and `values`, without `key`, if the table has no primary key.
+
+    {
+      "type" : "add",
+      "body" : {
+        "table"  : "Person",
+        "values" : {
+          "name" : "Bob Dylan",
+          "job"  : "musician"
+        }
+      }
+    }
+    
+    => {
+         "type" : "add.result",
+         "body" : true
+       }
+
+The `add` command works recursively. If there is no existing record with the given key in the referenced table, such a record is also added automatically and silently, so you'll see no error response. For example, this adds a new Person record together with a new Job record named `doctor`.
+
+    {
+      "type" : "add",
+      "body" : {
+        "table"  : "Person",
+        "values" : {
+          "name" : "Alice Miller",
+          "job"  : "doctor"
+        }
+      }
+    }
+    
+    => {
+         "type" : "add.result",
+         "body" : true
+       }
+
+By the command above, a new record is automatically added to the Job table, like:
+
+|_key|label|
+|----|-----|
+|announcer|announcer|
+|musician|musician|
+|doctor|(blank)|
+
+
+### Adding a new record to a table with primary key {#adding-record-to-table-with-key}
+
+Specify all parameters `table`, `values` and `key`, if the table has a primary key column.
+
+    {
+      "type" : "add",
+      "body" : {
+        "table"  : "Job",
+        "key"    : "writer",
+        "values" : {
+          "label" : "writer"
+        }
+      }
+    }
+    
+    => {
+         "type" : "add.result",
+         "body" : true
+       }
+
+### Updating column values of an existing record {#updating}
+
+This command works as an "updating" operation, if the table has a primary key column and there is an existing record with the specified key.
+
+    {
+      "type" : "add",
+      "body" : {
+        "table"  : "Job",
+        "key"    : "doctor",
+        "values" : {
+          "label" : "doctor"
+        }
+      }
+    }
+    
+    => {
+         "type" : "add.result",
+         "body" : true
+       }
+
+
+You cannot update column values of existing records if the table has no primary key column. In that case this command always works as an "adding" operation for the table.
+
+
+## Parameter details {#parameters}
+
+### `table` {#parameter-table}
+
+Abstract
+: The name of a table which a record is going to be added to.
+
+Value
+: A name string of an existing table.
+
+Default value
+: Nothing. This is a required parameter.
+
+### `key` {#parameter-key}
+
+Abstract
+: The primary key for the record going to be added.
+
+Value
+: A primary key string.
+
+Default value
+: Nothing. This is required if the table has a primary key column. Otherwise, this is ignored.
+
+Existing column values will be updated, if there is an existing record for the key.
+
+This parameter will be ignored if the table has no primary key column.
+
+### `values` {#parameter-values}
+
+Abstract
+: New values for columns of the record.
+
+Value
+: A hash. Keys of the hash are column names, values of the hash are new values for each column.
+
+Default value
+: `null`
+
+Value of unspecified columns will not be changed.
+
+
+## Responses {#response}
+
+This returns a boolean value `true` like the following as the response's `body`, with `200` as its `statusCode`, if a record is successfully added or updated.
+
+    true
+
+## Error types {#errors}
+
+This command reports not only [general errors](/reference/message/#error) but also the following errors.
+
+### `MissingTableParameter`
+
+Means you've forgotten to specify the `table` parameter. The status code is `400`.
+
+### `MissingPrimaryKeyParameter`
+
+Means you've forgotten to specify the `key` parameter, for a table with the primary key column. The status code is `400`.
+
+### `InvalidValue`
+
+Means you've specified an invalid value for a column. For example, a string for a geolocation column, a string for an integer column, etc. The status code is `400`.
+
+### `UnknownTable`
+
+Means you've specified a table which does not exist in the specified dataset. The status code is `404`.
+
+### `UnknownColumn`
+
+Means you've specified a column which does not exist in the specified table. The status code is `404`.
+

  Added: reference/1.1.0/commands/column-create/index.md (+101 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/commands/column-create/index.md    2014-11-30 23:20:40 +0900 (56622bf)
@@ -0,0 +1,101 @@
+---
+title: column_create
+layout: en
+---
+
+* TOC
+{:toc}
+
+## Abstract {#abstract}
+
+The `column_create` command creates a new column into the specified table.
+
+This is compatible to [the `column_create` command of the Groonga](http://groonga.org/docs/reference/commands/column_create.html).
+
+## API types {#api-types}
+
+### HTTP {#api-types-http}
+
+Request endpoint
+: `(Document Root)/d/column_create`
+
+Request method
+: `GET`
+
+Request URL parameters
+: Same to the list of [parameters](#parameters).
+
+Request body
+: Nothing.
+
+Response body
+: A [response message](#response).
+
+### REST {#api-types-rest}
+
+Not supported.
+
+### Fluentd {#api-types-fluentd}
+
+Style
+: Request-Response. One response message is always returned per one request.
+
+`type` of the request
+: `column_create`
+
+`body` of the request
+: A hash of [parameters](#parameters).
+
+`type` of the response
+: `column_create.result`
+
+## Parameter syntax {#syntax}
+
+    {
+      "table"  : "<Name of the table>",
+      "name"   : "<Name of the column>",
+      "flags"  : "<Flags for the column>",
+      "type"   : "<Type of the value>",
+      "source" : "<Name of a column to be indexed>"
+    }
+
+## Parameter details {#parameters}
+
+All parameters except `table` and `name` are optional.
+
+They are compatible to [the parameters of the `column_create` command of the Groonga](http://groonga.org/docs/reference/commands/column_create.html#parameters). See the linked document for more details.
+
+## Responses {#response}
+
+This returns an array meaning the result of the operation, as the `body`.
+
+    [
+      [
+        <Groonga's status code>,
+        <Start time>,
+        <Elapsed time>
+      ],
+      <Column is successfully created or not>
+    ]
+
+This command always returns a response with `200` as its `statusCode`, because this is a Groonga compatible command and errors of this command must be handled in the same way as Groonga's.
+
+Response body's details:
+
+Status code
+: An integer meaning the operation's result. Possible values are:
+  
+   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : Successfully processed.
+   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : There is any invalid argument.
+
+Start time
+: A UNIX time at which the operation was started.
+
+Elapsed time
+: A decimal of seconds meaning the elapsed time for the operation.
+
+Column is successfully created or not
+: A boolean value meaning the column was successfully created or not. Possible values are:
+  
+   * `true`: The column was successfully created.
+   * `false`: The column was not created.
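+
+For illustration, a successful response body could look like the following (a hypothetical example; the values are illustrative only and are annotated here with comments):
+
+    [
+      [
+        0,              # Groonga's status code: SUCCESS
+        1417357240.0,   # start time (UNIX time)
+        0.002           # elapsed time in seconds
+      ],
+      true              # the column was successfully created
+    ]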

  Added: reference/1.1.0/commands/column-list/index.md (+94 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/commands/column-list/index.md    2014-11-30 23:20:40 +0900 (6167700)
@@ -0,0 +1,94 @@
+---
+title: column_list
+layout: en
+---
+
+* TOC
+{:toc}
+
+## Abstract {#abstract}
+
+The `column_list` command reports the list of all existing columns in a table.
+
+This is compatible to [the `column_list` command of the Groonga](http://groonga.org/docs/reference/commands/column_list.html).
+
+## API types {#api-types}
+
+### HTTP {#api-types-http}
+
+Request endpoint
+: `(Document Root)/d/column_list`
+
+Request method
+: `GET`
+
+Request URL parameters
+: Same to the list of [parameters](#parameters).
+
+Request body
+: Nothing.
+
+Response body
+: A [response message](#response).
+
+### REST {#api-types-rest}
+
+Not supported.
+
+### Fluentd {#api-types-fluentd}
+
+Style
+: Request-Response. One response message is always returned per request.
+
+`type` of the request
+: `column_list`
+
+`body` of the request
+: A hash of [parameters](#parameters).
+
+`type` of the response
+: `column_list.result`
+
+## Parameter syntax {#syntax}
+
+    {
+      "table" : "<Name of the table>"
+    }
+
+## Parameter details {#parameters}
+
+The only parameter, `table`, is required.
+
+It is compatible with [the parameters of the `column_list` command of Groonga](http://groonga.org/docs/reference/commands/column_list.html#parameters). See the linked document for more details.
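+
+For example, to list the columns of a table (the `Person` table here is hypothetical):
+
+    {
+      "table" : "Person"
+    }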
+
+## Responses {#response}
+
+This command returns an array describing the result of the operation as its `body`.
+
+    [
+      [
+        <Groonga's status code>,
+        <Start time>,
+        <Elapsed time>
+      ],
+      <List of columns>
+    ]
+
+The structure of the returned array is compatible with [the return value of Groonga's `column_list` command](http://groonga.org/docs/reference/commands/column_list.html#return-value). See the linked document for more details.
+
+This command always returns a response with `200` as its `statusCode`, because this is a Groonga-compatible command and its errors must be handled in the same way as Groonga's.
+
+Response body's details:
+
+Status code
+: An integer which means the operation's result. Possible values are:
+  
+   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : Successfully processed.
+   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : An invalid argument was given.
+
+Start time
+: A UNIX time at which the operation was started.
+
+Elapsed time
+: A decimal number of seconds meaning the elapsed time of the operation.
+

  Added: reference/1.1.0/commands/column-remove/index.md (+98 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/commands/column-remove/index.md    2014-11-30 23:20:40 +0900 (27be0c1)
@@ -0,0 +1,98 @@
+---
+title: column_remove
+layout: en
+---
+
+* TOC
+{:toc}
+
+## Abstract {#abstract}
+
+The `column_remove` command removes an existing column in a table.
+
+This is compatible with [the `column_remove` command of Groonga](http://groonga.org/docs/reference/commands/column_remove.html).
+
+## API types {#api-types}
+
+### HTTP {#api-types-http}
+
+Request endpoint
+: `(Document Root)/d/column_remove`
+
+Request method
+: `GET`
+
+Request URL parameters
+: Same as the list of [parameters](#parameters).
+
+Request body
+: Nothing.
+
+Response body
+: A [response message](#response).
+
+### REST {#api-types-rest}
+
+Not supported.
+
+### Fluentd {#api-types-fluentd}
+
+Style
+: Request-Response. One response message is always returned per request.
+
+`type` of the request
+: `column_remove`
+
+`body` of the request
+: A hash of [parameters](#parameters).
+
+`type` of the response
+: `column_remove.result`
+
+## Parameter syntax {#syntax}
+
+    {
+      "table" : "<Name of the table>",
+      "name"  : "<Name of the column>"
+    }
+
+## Parameter details {#parameters}
+
+All parameters are required.
+
+They are compatible with [the parameters of the `column_remove` command of Groonga](http://groonga.org/docs/reference/commands/column_remove.html#parameters). See the linked document for more details.
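+
+For example, the following parameters would remove a column (the `Person` table and its `age` column here are hypothetical):
+
+    {
+      "table" : "Person",
+      "name"  : "age"
+    }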
+
+## Responses {#response}
+
+This command returns an array describing the result of the operation as its `body`.
+
+    [
+      [
+        <Groonga's status code>,
+        <Start time>,
+        <Elapsed time>
+      ],
+      <Column is successfully removed or not>
+    ]
+
+This command always returns a response with `200` as its `statusCode`, because this is a Groonga-compatible command and its errors must be handled in the same way as Groonga's.
+
+Response body's details:
+
+Status code
+: An integer which means the operation's result. Possible values are:
+  
+   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : Successfully processed.
+   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : An invalid argument was given.
+
+Start time
+: A UNIX time at which the operation was started.
+
+Elapsed time
+: A decimal number of seconds meaning the elapsed time of the operation.
+
+Column is successfully removed or not
+: A boolean value meaning whether the column was successfully removed or not. Possible values are:
+  
+   * `true` : The column was successfully removed.
+   * `false` : The column was not removed.

  Added: reference/1.1.0/commands/column-rename/index.md (+99 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/commands/column-rename/index.md    2014-11-30 23:20:40 +0900 (6350de3)
@@ -0,0 +1,99 @@
+---
+title: column_rename
+layout: en
+---
+
+* TOC
+{:toc}
+
+## Abstract {#abstract}
+
+The `column_rename` command renames an existing column in a table.
+
+This is compatible with [the `column_rename` command of Groonga](http://groonga.org/docs/reference/commands/column_rename.html).
+
+## API types {#api-types}
+
+### HTTP {#api-types-http}
+
+Request endpoint
+: `(Document Root)/d/column_rename`
+
+Request method
+: `GET`
+
+Request URL parameters
+: Same as the list of [parameters](#parameters).
+
+Request body
+: Nothing.
+
+Response body
+: A [response message](#response).
+
+### REST {#api-types-rest}
+
+Not supported.
+
+### Fluentd {#api-types-fluentd}
+
+Style
+: Request-Response. One response message is always returned per request.
+
+`type` of the request
+: `column_rename`
+
+`body` of the request
+: A hash of [parameters](#parameters).
+
+`type` of the response
+: `column_rename.result`
+
+## Parameter syntax {#syntax}
+
+    {
+      "table"    : "<Name of the table>",
+      "name"     : "<Current name of the column>",
+      "new_name" : "<New name of the column>"
+    }
+
+## Parameter details {#parameters}
+
+All parameters are required.
+
+They are compatible with [the parameters of the `column_rename` command of Groonga](http://groonga.org/docs/reference/commands/column_rename.html#parameters). See the linked document for more details.
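+
+For example, the following parameters would rename a column (the `Person` table and the column names here are hypothetical):
+
+    {
+      "table"    : "Person",
+      "name"     : "job",
+      "new_name" : "occupation"
+    }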
+
+## Responses {#response}
+
+This command returns an array describing the result of the operation as its `body`.
+
+    [
+      [
+        <Groonga's status code>,
+        <Start time>,
+        <Elapsed time>
+      ],
+      <Column is successfully renamed or not>
+    ]
+
+This command always returns a response with `200` as its `statusCode`, because this is a Groonga-compatible command and its errors must be handled in the same way as Groonga's.
+
+Response body's details:
+
+Status code
+: An integer which means the operation's result. Possible values are:
+  
+   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : Successfully processed.
+   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : An invalid argument was given.
+
+Start time
+: A UNIX time at which the operation was started.
+
+Elapsed time
+: A decimal number of seconds meaning the elapsed time of the operation.
+
+Column is successfully renamed or not
+: A boolean value meaning whether the column was successfully renamed or not. Possible values are:
+  
+   * `true` : The column was successfully renamed.
+   * `false` : The column was not renamed.

  Added: reference/1.1.0/commands/delete/index.md (+113 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/commands/delete/index.md    2014-11-30 23:20:40 +0900 (3ec230f)
@@ -0,0 +1,113 @@
+---
+title: delete
+layout: en
+---
+
+* TOC
+{:toc}
+
+## Abstract {#abstract}
+
+The `delete` command removes records in a table.
+
+This is compatible with [the `delete` command of Groonga](http://groonga.org/docs/reference/commands/delete.html).
+
+## API types {#api-types}
+
+### HTTP {#api-types-http}
+
+Request endpoint
+: `(Document Root)/d/delete`
+
+Request method
+: `GET`
+
+Request URL parameters
+: Same as the list of [parameters](#parameters).
+
+Request body
+: Nothing.
+
+Response body
+: A [response message](#response).
+
+### REST {#api-types-rest}
+
+Not supported.
+
+### Fluentd {#api-types-fluentd}
+
+Style
+: Request-Response. One response message is always returned per request.
+
+`type` of the request
+: `delete`
+
+`body` of the request
+: A hash of [parameters](#parameters).
+
+`type` of the response
+: `delete.result`
+
+## Parameter syntax {#syntax}
+
+    {
+      "table" : "<Name of the table>",
+      "key"   : "<Key of the record>"
+    }
+
+or
+
+    {
+      "table" : "<Name of the table>",
+      "id"    : "<ID of the record>"
+    }
+
+or
+
+    {
+      "table"  : "<Name of the table>",
+      "filter" : "<Complex search conditions>"
+    }
+
+## Parameter details {#parameters}
+
+All parameters except `table` are optional.
+However, you must specify one of `key`, `id`, or `filter` to identify the record (or records) to be removed.
+
+They are compatible with [the parameters of the `delete` command of Groonga](http://groonga.org/docs/reference/commands/delete.html#parameters). See the linked document for more details.
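+
+For example, the following parameters would remove a single record by its key, or all records matching a condition (the `Person` table, its `age` column, and the key here are hypothetical):
+
+    {
+      "table" : "Person",
+      "key"   : "Bob Dole"
+    }
+
+or
+
+    {
+      "table"  : "Person",
+      "filter" : "age >= 60"
+    }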
+
+## Responses {#response}
+
+This command returns an array describing the result of the operation as its `body`.
+
+    [
+      [
+        <Groonga's status code>,
+        <Start time>,
+        <Elapsed time>
+      ],
+      <Records are successfully removed or not>
+    ]
+
+This command always returns a response with `200` as its `statusCode`, because this is a Groonga-compatible command and its errors must be handled in the same way as Groonga's.
+
+Response body's details:
+
+Status code
+: An integer which means the operation's result. Possible values are:
+  
+   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : Successfully processed.
+   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : An invalid argument was given.
+
+Start time
+: A UNIX time at which the operation was started.
+
+Elapsed time
+: A decimal number of seconds meaning the elapsed time of the operation.
+
+Records are successfully removed or not
+: A boolean value meaning whether the specified records were successfully removed or not. Possible values are:
+  
+   * `true` : Records were successfully removed.
+   * `false` : Records were not removed.

  Added: reference/1.1.0/commands/index.md (+26 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/commands/index.md    2014-11-30 23:20:40 +0900 (ea1909f)
@@ -0,0 +1,26 @@
+---
+title: Commands
+layout: en
+---
+
+Here are the available commands:
+
+## Built-in commands
+
+ * [search](search/): Searches data
+ * [add](add/): Adds a record
+ * system: Reports system information of the cluster
+   * [system.status](system/status/): Reports status information of the cluster
+
+## Groonga compatible commands
+
+ * [column_create](column-create/)
+ * [column_list](column-list/)
+ * [column_remove](column-remove/)
+ * [column_rename](column-rename/)
+ * [delete](delete/)
+ * [load](load/)
+ * [select](select/)
+ * [table_create](table-create/)
+ * [table_list](table-list/)
+ * [table_remove](table-remove/)

  Added: reference/1.1.0/commands/load/index.md (+116 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/commands/load/index.md    2014-11-30 23:20:40 +0900 (3570770)
@@ -0,0 +1,116 @@
+---
+title: load
+layout: en
+---
+
+* TOC
+{:toc}
+
+## Abstract {#abstract}
+
+The `load` command adds new records to the specified table.
+Column values of existing records are updated with the new values, if the table has a primary key and there are existing records with the specified keys.
+
+This is compatible with [the `load` command of Groonga](http://groonga.org/docs/reference/commands/load.html).
+
+## API types {#api-types}
+
+### HTTP (GET) {#api-types-http-get}
+
+Request endpoint
+: `(Document Root)/d/load`
+
+Request method
+: `GET`
+
+Request URL parameters
+: Same as the list of [parameters](#parameters).
+
+Request body
+: Nothing.
+
+Response body
+: A [response message](#response).
+
+### HTTP (POST) {#api-types-http-post}
+
+Request endpoint
+: `(Document Root)/d/load`
+
+Request method
+: `POST`
+
+Request URL parameters
+: Same as the list of [parameters](#parameters), except `values`.
+
+Request body
+: The value for the [parameter](#parameters) `values`.
+
+Response body
+: A [response message](#response).
+
+### REST {#api-types-rest}
+
+Not supported.
+
+### Fluentd {#api-types-fluentd}
+
+Not supported.
+
+## Parameter syntax {#syntax}
+
+    {
+      "values"     : <Array of records to be loaded>,
+      "table"      : "<Name of the table>",
+      "columns"    : "<List of column names for values, separated by ','>",
+      "ifexists"   : "<Grn_expr to determine records which should be updated>",
+      "input_type" : "<Format type of the values>"
+    }
+
+## Parameter details {#parameters}
+
+All parameters except `table` are optional.
+
+In version {{ site.droonga_version }}, only the following parameters are available. Others are simply ignored because they are not implemented.
+
+ * `values`
+ * `table`
+ * `columns`
+
+They are compatible with [the parameters of the `load` command of Groonga](http://groonga.org/docs/reference/commands/load.html#parameters). See the linked document for more details.
+
+HTTP clients can send `values` as a URL parameter with the `GET` method, or as the request body with the `POST` method.
+The URL parameter `values` is always ignored if it is sent with the `POST` method.
+You should send data with the `POST` method if there is a large amount of data, as shown in the sketch below.
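+
+For example, a `POST` request might look like the following sketch (the `Person` table and its columns are hypothetical and must exist beforehand), with this request URL:
+
+    (Document Root)/d/load?table=Person
+
+and this request body as the `values`:
+
+    [
+      { "_key" : "Alice Arnold", "name" : "Alice Arnold", "age" : 20 },
+      { "_key" : "Bob Dole",     "name" : "Bob Dole",     "age" : 42 }
+    ]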
+
+## Responses {#response}
+
+This command returns an array describing the result of the operation as its `body`.
+
+    [
+      [
+        <Groonga's status code>,
+        <Start time>,
+        <Elapsed time>
+      ],
+      [<Number of loaded records>]
+    ]
+
+This command always returns a response with `200` as its `statusCode`, because this is a Groonga-compatible command and its errors must be handled in the same way as Groonga's.
+
+Response body's details:
+
+Status code
+: An integer which means the operation's result. Possible values are:
+  
+   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : Successfully processed.
+   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : An invalid argument was given.
+
+Start time
+: A UNIX time at which the operation was started.
+
+Elapsed time
+: A decimal number of seconds meaning the elapsed time of the operation.
+
+Number of loaded records
+: A positive integer meaning the number of added or updated records.

  Added: reference/1.1.0/commands/search/index.md (+1365 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/commands/search/index.md    2014-11-30 23:20:40 +0900 (ab7b810)
@@ -0,0 +1,1365 @@
+---
+title: search
+layout: en
+---
+
+* TOC
+{:toc}
+
+## Abstract {#abstract}
+
+The `search` command finds records from the specified table based on given conditions, and returns the found records and/or related information.
+
+This is designed as the most basic (low-level) command on Droonga for searching information from a database. When you want to add a new plugin including a "search" feature, you should develop it as just a wrapper of this command, instead of developing something based on lower-level technologies.
+
+## API types {#api-types}
+
+### HTTP {#api-types-http}
+
+Request endpoint
+: `(Document Root)/droonga/search`
+
+Request method
+: `POST`
+
+Request URL parameters
+: Nothing.
+
+Request body
+: A hash of [parameters](#parameters).
+
+Response body
+: A [response message](#response).
+
+### REST {#api-types-rest}
+
+Request endpoint
+: `(Document Root)/tables/(table name)`
+
+Request method
+: `GET`
+
+Request URL parameters
+: They are applied to corresponding [parameters](#parameters):
+  
+   * `query`: A string, applied to [`(root).(table name).condition.query`](#usage-condition-query-syntax).
+   * `match_to`: A comma-separated string, applied to [`(root).(table name).condition.matchTo`](#usage-condition-query-syntax).
+   * `sort_by`: A comma-separated string, applied to [`(root).(table name).sortBy`](#query-sortBy).
+   * `attributes`: A comma-separated string, applied to [`(root).(table name).output.attributes`](#query-output).
+   * `offset`: An integer, applied to [`(root).(table name).output.offset`](#query-output).
+   * `limit`: An integer, applied to [`(root).(table name).output.limit`](#query-output).
+   * `timeout`: An integer, applied to [`(root).timeout`](#parameter-timeout).
+
+<!--
+   * `group_by[(column name)][key]`: A string, applied to [`(root).(column name).groupBy.key`](#query-groupBy).
+   * `group_by[(column name)][max_n_sub_records]`: An integer, applied to [`(root).(column name).groupBy.maxNSubRecords`](#query-groupBy).
+   * `group_by[(column name)][attributes]`: A comma-separated string, applied to [`(root).(column name).output.attributes`](#query-output).
+   * `group_by[(column name)][attributes][(attribute name)][source]`: A string, applied to [`(root).(column name).output.attributes.(attribute name).source`](#query-output).
+   * `group_by[(column name)][attributes][(attribute name)][attributes]`: A comma-separated string, applied to [`(root).(column name).output.attributes.(attribute name).attributes`](#query-output).
+   * `group_by[(column name)][limit]`: An integer, applied to [`(root).(column name).output.limit`](#query-output).
+-->
+  
+  For example:
+  
+   * `/tables/Store?query=NY&match_to=_key&attributes=_key,*&limit=10`
+
+<!--
+   * `/tables/Store?query=NY&match_to=_key&attributes=_key,*&limit=10&group_by[location][key]=location&group_by[location][limit]=5&group_by[location][attributes]=_key,_nsubrecs`
+   * `/tables/Store?query=NY&match_to=_key&attributes=_key,*&limit=10&group_by[location][key]=location&group_by[location][limit]=5&group_by[location][attributes][_key][souce]=_key&group_by[location][attributes][_nsubrecs][souce]=_nsubrecs`
+   * `/tables/Store?query=NY&match_to=_key&limit=0&group_by[location][key]=location&group_by[location][max_n_sub_records]=5&group_by[location][limit]=5&group_by[location][attributes][sub_records][source]=_subrecs&group_by[location][attributes][sub_records][attributes]=_key,location`
+-->
+
+Request body
+: Nothing.
+
+Response body
+: A [response message](#response).
+
+### Fluentd {#api-types-fluentd}
+
+Style
+: Request-Response. One response message is always returned per request.
+
+`type` of the request
+: `search`
+
+`body` of the request
+: A hash of [parameters](#parameters).
+
+`type` of the response
+: `search.result`
+
+## Parameter syntax {#syntax}
+
+    {
+      "timeout" : <Seconds to be timed out>,
+      "queries" : {
+        "<Name of the query 1>" : {
+          "source"    : "<Name of a table or another query>",
+          "condition" : <Search conditions>,
+          "sortBy"    : <Sort conditions>,
+          "groupBy"   : <Group conditions>,
+          "output"    : <Output conditions>
+        },
+        "<Name of the query 2>" : { ... },
+        ...
+      }
+    }
+
+## Usage {#usage}
+
+This section describes how to use this command, via a typical usage with the following table:
+
+Person table (with primary key):
+
+|_key|name|age|sex|job|note|
+|----|----|---|---|---|----|
+|Alice Arnold|Alice Arnold|20|female|announcer||
+|Alice Cooper|Alice Cooper|30|male|musician||
+|Alice Miller|Alice Miller|25|female|doctor||
+|Bob Dole|Bob Dole|42|male|lawyer||
+|Bob Cousy|Bob Cousy|38|male|basketball player||
+|Bob Wolcott|Bob Wolcott|36|male|baseball player||
+|Bob Evans|Bob Evans|31|male|driver||
+|Bob Ross|Bob Ross|54|male|painter||
+|Lewis Carroll|Lewis Carroll|66|male|writer|the author of Alice's Adventures in Wonderland|
+
+Note: `name` and `note` are indexed with `TokenBigram`.
+
+### Basic usage {#usage-basic}
+
+This is a simple example to output all records of the Person table:
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "people" : {
+            "source" : "Person",
+            "output" : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["_key", "*"],
+              "limit"      : -1
+            }
+          }
+        }
+      }
+    }
+    
+    => {
+         "type" : "search.result",
+         "body" : {
+           "people" : {
+             "count" : 9,
+             "records" : [
+               ["Alice Arnold", "Alice Arnold", 20, "female", "announcer", ""],
+               ["Alice Cooper", "Alice Cooper", 30, "male", "musician", ""],
+               ["Alice Miller", "Alice Miller", 25, "female", "doctor", ""],
+               ["Bob Dole", "Bob Dole", 42, "male", "lawyer", ""],
+               ["Bob Cousy", "Bob Cousy", 38, "male", "basketball player", ""],
+               ["Bob Wolcott", "Bob Wolcott", 36, "male", "baseball player", ""],
+               ["Bob Evans", "Bob Evans", 31, "male", "driver", ""],
+               ["Bob Ross", "Bob Ross", 54, "male", "painter", ""],
+               ["Lewis Carroll", "Lewis Carroll", 66, "male", "writer",
+                "the author of Alice's Adventures in Wonderland"]
+             ]
+           }
+         }
+       }
+
+The name `people` is a temporary name for the search query and its result.
+A response of a `search` command will be returned as a hash, and its keys are the same as the keys of the given `queries`.
+So, this means: "name the search result of the query as `people`".
+
+Why does the command above return all records of the table? Because:
+
+ * There is no search condition. This command matches all records in the specified table, if no condition is specified.
+ * [`output`](#query-output)'s `elements` contains `records` (and `count`). The parameter `elements` controls the returned information. Matched records are returned as `records`, and the total number of matched records is returned as `count`.
+ * [`output`](#query-output)'s `limit` is `-1`. The parameter `limit` controls the number of returned records, and `-1` means "return all records".
+ * [`output`](#query-output)'s `attributes` contains two values, `"_key"` and `"*"`. They mean "all columns of the Person table, including `_key`", which is equivalent to `["_key", "name", "age", "sex", "job", "note"]` in this case. The parameter `attributes` controls which columns' values are returned.
+
+
+#### Search conditions {#usage-condition}
+
+Search conditions are specified via the `condition` parameter. There are two styles of search conditions: "script syntax" and "query syntax". See [`condition` parameter](#query-condition) for more details.
+
+##### Search conditions in Script syntax {#usage-condition-script-syntax}
+
+Search conditions in script syntax are similar to ECMAScript. For example, the following query means "find records whose `name` contains `Alice` and whose `age` is larger than or equal to `25`":
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "people" : {
+            "source"    : "Person",
+            "condition" : "name @ 'Alice' && age >= 25",
+            "output"    : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["name", "age"],
+              "limit"      : -1
+            }
+          }
+        }
+      }
+    }
+
+    => {
+         "type" : "search.result",
+         "body" : {
+           "people" : {
+             "count" : 2,
+             "records" : [
+               ["Alice Cooper", 30],
+               ["Alice Miller", 25]
+             ]
+           }
+         }
+       }
+
+[The script syntax is compatible with Groonga's](http://groonga.org/docs/reference/grn_expr/script_syntax.html). See the linked document for more details.
+
+##### Search conditions in Query syntax {#usage-condition-query-syntax}
+
+The query syntax is mainly designed for search boxes in webpages. For example, the following query means "find records whose `name` or `note` contains the given word, where the word is `Alice`":
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "people" : {
+            "source"    : "Person",
+            "condition" : {
+              "query"   : "Alice",
+              "matchTo" : ["name", "note"]
+            },
+            "output"    : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["name", "note"],
+              "limit"      : -1
+            }
+          }
+        }
+      }
+    }
+    
+    => {
+         "type" : "search.result",
+         "body" : {
+           "people" : {
+             "count" : 4,
+             "records" : [
+               ["Alice Arnold", ""],
+               ["Alice Cooper", ""],
+               ["Alice Miller", ""],
+               ["Lewis Carroll",
+                "the author of Alice's Adventures in Wonderland"]
+             ]
+           }
+         }
+       }
+
+[The query syntax is compatible with Groonga's](http://groonga.org/docs/reference/grn_expr/query_syntax.html). See the linked document for more details.
+
+
+#### Sorting of search results {#usage-sort}
+
+Returned records can be sorted by conditions specified in the `sortBy` parameter. For example, the following query means "sort results by their `age`, in ascending order":
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "people" : {
+            "source"    : "Person",
+            "condition" : "name @ 'Alice'",
+            "sortBy"    : ["age"],
+            "output"    : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["name", "age"],
+              "limit"      : -1
+            }
+          }
+        }
+      }
+    }
+    
+    => {
+         "type" : "search.result",
+         "body" : {
+           "people" : {
+             "count" : 3,
+             "records" : [
+               ["Alice Arnold", 20],
+               ["Alice Miller", 25],
+               ["Alice Cooper", 30]
+             ]
+           }
+         }
+       }
+
+If you add `-` before the name of a column, then search results are returned in descending order. For example:
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "people" : {
+            "source"    : "Person",
+            "condition" : "name @ 'Alice'",
+            "sortBy"    : ["-age"],
+            "output"    : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["name", "age"],
+              "limit"      : -1
+            }
+          }
+        }
+      }
+    }
+    
+    => {
+         "type" : "search.result",
+         "body" : {
+           "people" : {
+             "count" : 3,
+             "records" : [
+               ["Alice Cooper", 30],
+               ["Alice Miller", 25],
+               ["Alice Arnold", 20]
+             ]
+           }
+         }
+       }
+
+See [`sortBy` parameter](#query-sortBy) for more details.
+
+#### Paging of search results {#usage-paging}
+
+Search results can be returned partially via `offset` and `limit` under the [`output`](#query-output) parameter. For example, the following queries will return 20 or more search results, 10 at a time.
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "people" : {
+            "source" : "Person",
+            "output" : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["name"],
+              "offset"     : 0,
+              "limit"      : 10
+            }
+          }
+        }
+      }
+    }
+    
+    => returns 10 results from the 1st to the 10th.
+    
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "people" : {
+            "source" : "Person",
+            "output" : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["name"],
+              "offset"     : 10,
+              "limit"      : 10
+            }
+          }
+        }
+      }
+    }
+    
+    => returns 10 results from the 11th to the 20th.
+    
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "people" : {
+            "source" : "Person",
+            "output" : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["name"],
+              "offset"     : 20,
+              "limit"      : 10
+            }
+          }
+        }
+      }
+    }
+    
+    => returns 10 results from the 21st to the 30th.
+
+The value `-1` is not recommended for the `limit` parameter in regular use, because it can return too many results and increase the traffic load. Instead, a value of `100` or less is recommended for the `limit` parameter. Then you should do paging with the `offset` parameter.
+
+See [`output` parameter](#query-output) for more details.
+
+Moreover, you can do paging via [the `sortBy` parameter](#query-sortBy-hash) and it will work faster than the paging by the `output` parameter. You should do paging via the `sortBy` parameter instead of `output` as much as possible.
+
+
+#### Output format {#usage-format}
+
+Search result records in examples above are shown as arrays of arrays, but they can be returned as arrays of hashes by the [`output`](#query-output)'s `format` parameter. If you specify `complex` for the `format`, then results are returned like:
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "people" : {
+            "source" : "Person",
+            "output" : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["_key", "name", "age", "sex", "job", "note"],
+              "limit"      : 3,
+              "format"     : "complex"
+            }
+          }
+        }
+      }
+    }
+    
+    => {
+         "type" : "search.result",
+         "body" : {
+           "people" : {
+             "count" : 9,
+             "records" : [
+               { "_key" : "Alice Arnold",
+                 "name" : "Alice Arnold",
+                 "age"  : 20,
+                 "sex"  : "female",
+                 "job"  : "announcer",
+                 "note" : "" },
+               { "_key" : "Alice Cooper",
+                 "name" : "Alice Cooper",
+                 "age"  : 30,
+                 "sex"  : "male",
+                 "job"  : "musician",
+                 "note" : "" },
+               { "_key" : "Alice Miller",
+                 "name" : "Alice Miller",
+                 "age"  : 25,
+                 "sex"  : "female",
+                 "job"  : "doctor",
+                 "note" : "" }
+             ]
+           }
+         }
+       }
+
+Search result records will be returned as an array of hashes, when you specify `complex` as the value of the `format` parameter.
+Otherwise (when `simple` is specified or nothing is specified), records are returned as an array of arrays.
+
+See [`output` parameters](#query-output) and [responses](#response) for more details.
+
+
+### Advanced usage {#usage-advanced}
+
+#### Grouping {#usage-group}
+
+You can group search results by a column, via the [`groupBy`](#query-groupBy) parameter. For example, the following query returns a result grouped by the `sex` column, with the count of the original search results for each group:
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "sexuality" : {
+            "source"  : "Person",
+            "groupBy" : "sex",
+            "output"  : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["_key", "_nsubrecs"],
+              "limit"      : -1
+            }
+          }
+        }
+      }
+    }
+    
+    => {
+         "type" : "search.result",
+         "body" : {
+           "sexuality" : {
+             "count" : 2,
+             "records" : [
+               ["female", 2],
+               ["male", 7]
+             ]
+           }
+         }
+       }
+
+The result means: "there are two `female` records and seven `male` records; moreover, there are two distinct values for the column `sex`".
+
+You can also extract ungrouped (source) records via the `maxNSubRecords` parameter and the `_subrecs` virtual column. For example, the following query returns the result grouped by `sex` and extracts up to two ungrouped records for each group:
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "sexuality" : {
+            "source"  : "Person",
+            "groupBy" : {
+              "key"            : "sex",
+              "maxNSubRecords" : 2
+            },
+            "output"  : {
+              "elements"   : ["count", "records"],
+              "attributes" : [
+                "_key",
+                "_nsubrecs",
+                { "label"      : "subrecords",
+                  "source"     : "_subrecs",
+                  "attributes" : ["name"] }
+              ],
+              "limit"      : -1
+            }
+          }
+        }
+      }
+    }
+    
+    => {
+         "type" : "search.result",
+         "body" : {
+           "sexuality" : {
+             "count" : 2,
+             "records" : [
+               ["female", 2, [["Alice Arnold"], ["Alice Miller"]]],
+               ["male",   7, [["Alice Cooper"], ["Bob Dole"]]]
+             ]
+           }
+         }
+       }
+
+
+See [`groupBy` parameters](#query-groupBy) for more details.
+
+
+#### Multiple search queries in one request {#usage-multiple-queries}
+
+Multiple queries can appear in one `search` command. For example, the following request searches for people aged 25 or younger and for people aged 40 or older:
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "junior" : {
+            "source"    : "Person",
+            "condition" : "age <= 25",
+            "output"    : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["name", "age"],
+              "limit"      : -1
+            }
+          },
+          "senior" : {
+            "source"    : "Person",
+            "condition" : "age >= 40",
+            "output"    : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["name", "age"],
+              "limit"      : -1
+            }
+          }
+        }
+      }
+    }
+    
+    => {
+         "type" : "search.result",
+         "body" : {
+           "junior" : {
+             "count" : 2,
+             "records" : [
+               ["Alice Arnold", 20],
+               ["Alice Miller", 25]
+             ]
+           },
+           "senior" : {
+             "count" : 3,
+             "records" : [
+               ["Bob Dole", 42],
+               ["Bob Ross", 54],
+               ["Lewis Carroll", 66]
+             ]
+           }
+         }
+       }
+
+Each search result can be identified by the temporary name given for each query.
+
+#### Chained search queries {#usage-chain}
+
+You can specify not only an existing table but also the search result of another query as the value of the `source` parameter. Chained search queries can do a flexible search in just one request.
+
+For example, the following request returns two results: records whose `name` contains `Alice`, and those results grouped by their `sex` column:
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "people" : {
+            "source"    : "Person",
+            "condition" : "name @ 'Alice'",
+            "output"    : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["name", "age"],
+              "limit"      : -1
+            }
+          },
+          "sexuality" : {
+            "source"  : "people",
+            "groupBy" : "sex",
+            "output"  : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["_key", "_nsubrecs"],
+              "limit"      : -1
+            }
+          }
+        }
+      }
+    }
+    
+    => {
+         "type" : "search.result",
+         "body" : {
+           "people" : {
+             "count" : 3,
+             "records" : [
+               ["Alice Cooper", 30],
+               ["Alice Miller", 25],
+               ["Alice Arnold", 20]
+             ]
+           },
+           "sexuality" : {
+             "count" : 2,
+             "records" : [
+               ["female", 2],
+               ["male", 1]
+             ]
+           }
+         }
+       }
+
+You can use search queries just internally, without output. For example, the following query does: 1) group records of the Person table by their `job` column, and 2) extract grouped results which have the text `player` in their `job`. (*Note: The second query will be done without indexes, so it can be slow.)
+
+    {
+      "type" : "search",
+      "body" : {
+        "queries" : {
+          "allJob" : {
+            "source"  : "Person",
+            "groupBy" : "job"
+          },
+          "playerJob" : {
+            "source"    : "allJob",
+            "condition" : "_key @ 'player'",
+            "output"  : {
+              "elements"   : ["count", "records"],
+              "attributes" : ["_key", "_nsubrecs"],
+              "limit"      : -1
+            }
+          }
+        }
+      }
+    }
+    
+    => {
+         "type" : "search.result",
+         "body" : {
+           "playerJob" : {
+             "count" : 2,
+             "records" : [
+               ["basketball player", 1],
+               ["baseball player", 1]
+             ]
+           }
+         }
+       }
+
+
+## Parameter details {#parameters}
+
+### Container parameters {#container-parameters}
+
+#### `timeout` {#parameter-timeout}
+
+*Note: This parameter is not implemented yet in version {{ site.droonga_version }}.
+
+Abstract
+: Threshold to time out for the request.
+
+Value
+: An integer in milliseconds.
+
+Default value
+: `10000` (10 seconds)
+
+The Droonga Engine will return an error response instead of a search result, if the search operation takes too much time, longer than the given `timeout`.
+Clients may free resources for the search operation after the timeout.
+
+#### `queries` {#parameter-queries}
+
+Abstract
+: Search queries.
+
+Value
+: A hash. Keys of the hash are query names, values of the hash are [queries (hashes of query parameters)](#query-parameters).
+
+Default value
+: Nothing. This is a required parameter.
+
+You can put multiple search queries in a `search` request.
+
+In version {{ site.droonga_version }}, all search results for a request are returned at one time. In the future, as an optional behavior, each result may be returned progressively as separate messages.
+
+### Parameters of each query {#query-parameters}
+
+#### `source` {#query-source}
+
+Abstract
+: A source of a search operation.
+
+Value
+: A name string of an existing table, or a name of another query.
+
+Default value
+: Nothing. This is a required parameter.
+
+You can do a facet search, specifying a name of another search query as its source.
+
+The order of operations is automatically resolved by Droonga itself.
+You don't have to write queries in the order they should be operated in.
+
+#### `condition` {#query-condition}
+
+Abstract
+: Conditions to search records from the given source.
+
+Value
+: Possible patterns:
+  
+  1. A [script syntax](http://groonga.org/docs/reference/grn_expr/script_syntax.html) string.
+  2. A hash including [script syntax](http://groonga.org/docs/reference/grn_expr/script_syntax.html) string.
+  3. A hash including [query syntax](http://groonga.org/docs/reference/grn_expr/query_syntax.html) string.
+  4. An array of conditions from 1 to 3 and an operator.
+
+Default value
+: Nothing.
+
+If no condition is given, then all records in the source will appear as the search result, for the following operations and the output.
+
+##### Search condition in a Script syntax string {#query-condition-script-syntax-string}
+
+This is a sample condition in the script syntax:
+
+    "name == 'Alice' && age >= 20"
+
+It means "the value of the `name` column is equal to `"Alice"`, and the value of the `age` column is `20` or more".
+
+See [the reference document of the script syntax on Groonga](http://groonga.org/docs/reference/grn_expr/script_syntax.html) for more details.
+
+##### Search condition in a hash based on the Script syntax {#query-condition-script-syntax-hash}
+
+In this pattern, you'll specify a search condition as a hash based on a 
+[script syntax string](#query-condition-script-syntax-string), like:
+
+    {
+      "script"      : "name == 'Alice' && age >= 20",
+      "allowUpdate" : true
+    }
+
+(*Note: under construction because the specification of the `allowUpdate` parameter is not defined yet.)
+
+##### Search condition in a hash based on the Query syntax {#query-condition-query-syntax-hash}
+
+In this pattern, you'll specify a search condition as a hash like:
+
+    {
+      "query"                    : "Alice",
+      "matchTo"                  : ["name * 2", "job * 1"],
+      "defaultOperator"          : "&&",
+      "allowPragma"              : true,
+      "allowColumn"              : true,
+      "matchEscalationThreshold" : 10
+    }
+
+`query`
+: A string to specify the main search query. In most cases, a text posted via a search box in a webpage will be given.
+  See [the document of the query syntax in Groonga](http://groonga.org/docs/reference/grn_expr/query_syntax.html) for more details.
+  This parameter is always required.
+
+`matchTo`
+: An array of strings, meaning the list of column names to be searched by default. If you specify no column name in the `query`, it will work as a search query for columns specified by this parameter.
+  You can apply weighting for each column, like `name * 2`.
+  This parameter is optional.
+
+`defaultOperator`
+: A string to specify the default logical operator for multiple queries listed in the `query`. Possible values:
+  
+   * `"&&"` : means "AND" condition.
+   * `"||"` : means "OR" condition.
+   * `"-"`  : means ["NOT" condition](http://groonga.org/docs/reference/grn_expr/query_syntax.html#logical-not).
+  
+  This parameter is optional, the default value is `"&&"`.
+
+`allowPragma`
+: A boolean value to allow (`true`) or disallow (`false`) the use of a "pragma" like `*E-1` at the head of the `query`.
+  This parameter is optional, the default value is `true`.
+
+`allowColumn`
+: A boolean value to allow (`true`) or disallow (`false`) specifying a column name for each query in the `query`, like `name:Alice`.
+  This parameter is optional, the default value is `true`.
+
+`allowLeadingNot`
+: A boolean value to allow (`true`) or disallow (`false`) a "negative expression" appearing as the first query in the `query`, like `-foobar`.
+  This parameter is optional, the default value is `false`.
+
+`matchEscalationThreshold`
+: An integer to specify the threshold to escalate search methods.
+  When the number of search results by indexes is smaller than this value, then Droonga does the search based on partial matching, etc.
+  See also [the specification of the search behavior of Groonga](http://groonga.org/docs/spec/search.html) for more details.
+  This parameter is optional, the default value is `0`.
+
+
+##### Complex search condition as an array {#query-condition-array}
+
+In this pattern, you'll specify a search condition as an array like:
+
+    [
+      "&&",
+      <search condition 1>,
+      <search condition 2>,
+      ...
+    ]
+
+The first element of the array is an operator string. Possible values:
+
+ * `"&&"` : means "AND" condition.
+ * `"||"` : means "OR" condition.
+ * `"-"`  : means ["NOT" condition](http://groonga.org/docs/reference/grn_expr/query_syntax.html#logical-not).
+
+The rest of the elements are combined logically based on the operator.
+For example, this is an "AND" condition based on two conditions, meaning "the value of the `name` is equal to `"Alice"`, and the value of the `age` is `20` or more":
+
+    ["&&", "name == 'Alice'", "age >= 20"]
+
+Nested array means more complex conditions. For example, this means "`name` equals to `"Alice"` and `age` is `20` or more, but `job` does not equal to `"engineer"`":
+
+    [
+      "-",
+      ["&&", "name == 'Alice'", "age >= 20"],
+      "job == 'engineer'"
+    ]
+
+#### `sortBy` {#query-sortBy}
+
+Abstract
+: Conditions for sorting and paging.
+
+Value
+: Possible patterns:
+  
+  1. An array of column name strings.
+  2. A hash including an array of sort column name strings and paging conditions.
+
+Default value
+: Nothing.
+
+If sort conditions are not specified, then all results will appear as-is, for the following operations and the output.
+
+##### Basic sort condition {#query-sortBy-array}
+
+Sort condition is given as an array of column name strings.
+
+At first, Droonga tries to sort records by the value of the first given sort column. After that, if there are multiple records which have the same value for that column, then Droonga tries to sort them by the second given sort column. This process is repeated for all given sort columns.
+
+You must specify sort columns as an array, even if there is only one column.
+
+Records are sorted by the column value, in ascending order. Results can be sorted in descending order if the sort column name has the prefix `-`.
+
+For example, this condition means "sort records by the `name` column first in ascending order, and then sort them by their `age` column in descending order":
+
+    ["name", "-age"]
+
+##### Paging of sorted results {#query-sortBy-hash}
+
+Paging conditions can be specified as a part of a sort condition hash, like:
+
+    {
+      "keys"   : [<Sort columns>],
+      "offset" : <Offset of paging>,
+      "limit"  : <Number of results to be extracted>
+    }
+
+`keys`
+: Sort conditions same to [the basic sort condition](#query-sortBy-array).
+  This parameter is always required.
+
+`offset`
+: An integer meaning the offset to the paging of sorted results. Possible values are `0` or larger integers.
+  
+  This parameter is optional and the default value is `0`.
+
+`limit`
+: An integer meaning the number of sorted results to be extracted. Possible values are `-1`, `0`, or larger integers. The value `-1` means "return all results".
+  
+  This parameter is optional and the default value is `-1`.
+
+For example, this condition extracts 10 sorted results from 11th to 20th:
+
+    {
+      "keys"   : ["name", "-age"],
+      "offset" : 10,
+      "limit"  : 10
+    }
+
+In most cases, paging by a sort condition is faster than paging by `output`'s `offset` and `limit`, because this operation reduces the number of records at an earlier stage.
+
+
+#### `groupBy` {#query-groupBy}
+
+Abstract
+: A condition for grouping of (sorted) search results.
+
+Value
+: Possible patterns:
+  
+  1. A condition string to do grouping. (a column name or an expression)
+  2. A hash to specify a condition for grouping with details.
+
+Default value
+: Nothing.
+
+If a condition for grouping is given, then grouped result records will appear as the result, for the following operations and the output.
+
+##### Basic condition of grouping {#query-groupBy-string}
+
+A condition of grouping is given as a string of a column name or an expression.
+
+Droonga groups (sorted) search result records based on the value of the specified column. Then the result of the grouping will appear instead of the search results from the `source`. Result records of a grouping will have the following columns:
+
+`_key`
+: A value of the grouped column.
+
+`_nsubrecs`
+: An integer meaning the number of grouped records.
+
+For example, this condition means "group records by their `job` column's value, with the number of grouped records for each value":
+
+    "job"
+
+##### Condition of grouping with details {#query-groupBy-hash}
+
+A condition of grouping can include more options, like:
+
+    {
+      "key"            : "<Basic condition for grouping>",
+      "maxNSubRecords" : <Number of sample records included into each grouped result>
+    }
+
+`key`
+: A string meaning [a basic condition of grouping](#query-groupBy-string).
+  This parameter is always required.
+
+`maxNSubRecords`
+: An integer, meaning maximum number of sample records included into each grouped result. Possible values are `0` or larger. `-1` is not acceptable.
+  
+  This parameter is optional, the default value is `0`.
+  
+For example, this condition will return results grouped by their `job` column with one sample record per grouped result:
+
+    {
+      "key"            : "job",
+      "maxNSubRecords" : 1
+    }
+
+Grouped results will have all columns of [the result of the basic conditions for grouping](#query-groupBy-string), and following extra columns:
+
+`_subrecs`
+: An array of sample records which have the value in its grouped column.
+  
+*Note: In version {{ site.droonga_version }}, more records than the specified `maxNSubRecords` can be returned, if the dataset has multiple volumes. This is a known problem and will be fixed in a future version.
+
+
+#### `output` {#query-output}
+
+Abstract
+: An output definition for a search result.
+
+Value
+: A hash including information to control output format.
+
+Default value
+: Nothing.
+
+If no `output` is given, then the search results of the query won't be exported to the returned message.
+You can reduce processing time and traffic by omitting `output` for temporary tables which are used only for grouping and so on.
+
+An output definition is given as a hash like:
+
+    {
+      "elements"   : [<Names of elements to be exported>],
+      "format"     : "<Format of each record>",
+      "offset"     : <Offset of paging>,
+      "limit"      : <Number of records to be exported>,
+      "attributes" : <Definition of columns to be exported for each record>
+    }
+
+`elements`
+: An array of strings, meaning the list of elements exported to the result of the search query in a [search response](#response).
+  Possible values are the following, and you must specify them as an array even if you export just one element:
+  
+   * `"startTime"`
+   * `"elapsedTime"`
+   * `"count"`
+   * `"attributes"`
+   * `"records"`
+  
+  This parameter is optional, there is no default value. Nothing will be exported if no element is specified.
+
+`format`
+: A string meaning the format of each exported record.
+  Possible values:
+  
+   * `"simple"`  : Each record will be exported as an array of column values.
+   * `"complex"` : Each record will be exported as a hash.
+  
+  This parameter is optional, the default value is `"simple"`.
+
+`offset`
+: An integer meaning the offset to the paging of exported records. Possible values are `0` or larger integers.
+  
+  This parameter is optional and the default value is `0`.
+
+`limit`
+: An integer meaning the number of exported records. Possible values are `-1`, `0`, or larger integers. The value `-1` means "export all records".
+  
+  This parameter is optional and the default value is `0`.
+
+`attributes`
+: Definition of columns to be exported for each record.
+  Possible patterns:
+  
+   1. An array of column definitions.
+   2. A hash of column definitions.
+  
+  Each column can be defined in one of following styles:
+  
+   * A name string of a column.
+     * `"name"` : Exports the value of the `name` column, as is.
+     * `"age"`  : Exports the value of the `age` column, as is.
+   * A hash with details:
+     * This exports the value of the `name` column as a column with different name `realName`.
+       
+           { "label" : "realName", "source" : "name" }
+       
+     * This exports the snippet in HTML fragment as a column with the name `html`.
+       
+           { "label" : "html", "source": "snippet_html(name)" }
+       
+     * This exports a static value `"Japan"` for the `country` column of all records.
+       (This will be useful for debugging, or a use case to try modification of APIs.)
+       
+           { "label" : "country", "source" : "'Japan'" }
+       
+     * This exports a number of grouped records as the `"itemsCount"` column of each record (grouped result).
+       
+           { "label" : "itemsCount", "source" : "_nsubrecs" }
+       
+     * This exports samples of the source records of grouped records, as the `"items"` column of grouped records.
+       The format of the `"attributes"` here is just the same as described in this section.
+       
+           { "label" : "items", "source" : "_subrecs",
+             "attributes": ["name", "price"] }
+  
+  An array of column definitions can contain any type definition described above, like:
+  
+      [
+        "name",
+        "age",
+        { "label" : "realName", "source" : "name" }
+      ]
+  
+  In this case, you can use a special column name `"*"` which means "all columns except special columns like `_key`".
+  
+    * `["*"]` exports all columns (except `_key` and `_id`), as is.
+    * `["_key", "*"]` exports all columns as is, preceded by `_key`.
+    * `["*", "_nsubrecs"]` exports all columns as is, followed by `_nsubrecs`.
+  
+  A hash of column definitions can contain any type of definition described above except the `label` of hashes, because the keys of the hash mean the `label` of each column, like:
+  
+      {
+        "name"     : "name",
+        "age"      : "age",
+        "realName" : { "source" : "name" },
+        "country"  : { "source" : "'Japan'" }
+      }
+  
+  This parameter is optional, there is no default value. No column will be exported if no column is specified.
+
+
+## Responses {#response}
+
+This command returns a hash describing the result as the `body`, with `200` as the `statusCode`.
+
+The keys of the result hash are the names of the queries, and the values are the results of each [search query](#query-parameters), like:
+
+    {
+      "<Name of the query 1>" : {
+        "startTime"   : "<Time to start the operation>",
+        "elapsedTime" : <Elapsed time to process the query, in milliseconds>,
+        "count"       : <Number of records searched by the given conditions>,
+        "attributes"  : <Array or hash of exported columns>,
+        "records"     : [<Array of search result records>]
+      },
+      "<Name of the query 2>" : { ... },
+      ...
+    }
+
+A hash of a search query's result can have the following elements, but only the elements specified in `elements` of the [`output` parameter](#query-output) will appear in the response.
+
+### `startTime` {#response-query-startTime}
+
+A local time string meaning the time when the search operation was started.
+
+It is formatted in the [W3C-DTF](http://www.w3.org/TR/NOTE-datetime "Date and Time Formats"), with the time zone like:
+
+    2013-11-29T08:15:30+09:00
+
+### `elapsedTime` {#response-query-elapsedTime}
+
+An integer meaning the elapsed time of the search operation, in milliseconds.
+
+### `count` {#response-query-count}
+
+An integer meaning the total number of search result records.
+Paging options `offset` and `limit` in [`sortBy`](#query-sortBy) or [`output`](#query-output) will not affect this count.
+
+### `attributes` and `records` {#response-query-attributes-and-records}
+
+ * `attributes` is an array or a hash including information of exported columns for each record.
+ * `records` is an array of search result records.
+
+There are two possible patterns of `attributes` and `records`, based on the [`output`](#query-output)'s `format` parameter.
+
+#### Simple format result {#response-query-simple-attributes-and-records}
+
+A search result with `"simple"` as the value of `output`'s `format` will be returned as a hash like:
+
+    {
+      "startTime"   : "<Time to start the operation>",
+      "elapsedTime" : <Elapsed time to process the query>,
+      "count"       : <Total number of search result records>,
+      "attributes"  : [
+        { "name"   : "<Name of the column 1>",
+          "type"   : "<Type of the column 1>",
+          "vector" : <Is this column a vector column?> },
+        { "name"   : "<Name of the column 2>",
+          "type"   : "<Type of the column 2>",
+          "vector" : <Is this column a vector column?> },
+        { "name"       : "<Name of the column 3 (with subrecords)>",
+          "attributes" : [
+          { "name"   : "<Name of the column 3-1>",
+            "type"   : "<Type of the column 3-1>",
+            "vector" : <Is this column a vector column?> },
+          { "name"   : "<Name of the column 3-2>",
+            "type"   : "<Type of the column 3-2>",
+            "vector" : <Is this column a vector column?> },
+          ],
+          ...
+        },
+        ...
+      ],
+      "records"     : [
+        [<Value of the column 1 of the record 1>,
+         <Value of the column 2 of the record 1>,
+         [
+          [<Value of the column of 3-1 of the subrecord 1 of the record 1>,
+           <Value of the column of 3-2 of the subrecord 2 of the record 1>,
+           ...],
+          [<Value of the column of 3-1 of the subrecord 1 of the record 1>,
+           <Value of the column of 3-2 of the subrecord 2 of the record 1>,
+           ...],
+          ...],
+         ...],
+        [<Value of the column 1 of the record 2>,
+         <Value of the column 2 of the record 2>,
+         [
+          [<Value of the column of 3-1 of the subrecord 1 of the record 2>,
+           <Value of the column of 3-2 of the subrecord 2 of the record 2>,
+           ...],
+          [<Value of the column of 3-1 of the subrecord 1 of the record 2>,
+           <Value of the column of 3-2 of the subrecord 2 of the record 2>,
+           ...],
+          ...],
+         ...],
+        ...
+      ]
+    }
+
+This format is designed to reduce traffic with small responses, at the cost of a less convenient data structure.
+It is recommended for cases where responses can include many records, or the service receives many requests.
+
+##### `attributes` {#response-query-simple-attributes}
+
+An array of information about the exported columns of each search result record, ordered by the `attributes` of [the `output` parameter](#query-output).
+
+The information of each column is returned as a hash in one of the following three forms, depending on the kind of the column. The hash has the following keys respectively:
+
+###### For ordinal columns
+
+`name`
+: A string indicating the name (label) of the exported column. It is the same as the label defined in the `attributes` of [the `output` parameter](#query-output).
+
+`type`
+: A string meaning the value type of the column.
+  The type is indicated as one of [Groonga's primitive data types](http://groonga.org/docs/reference/types.html), or the name of an existing table for reference columns.
+
+`vector`
+: A boolean value indicating whether the column is a [vector column](http://groonga.org/docs/tutorial/data.html#vector-types) or not.
+  Possible values:
+  
+   * `true`  : It is a vector column.
+   * `false` : It is not a vector column, but a scalar column.
+
+###### For columns corresponding to subrecords
+
+`name`
+: A string indicating the name (label) of the exported column. It is the same as the label defined in the `attributes` of [the `output` parameter](#query-output).
+
+`attributes`
+: An array of information about the columns of subrecords. The form is the same as the `attributes` for (main) records; in other words, `attributes` has a recursive structure.
+
+###### For expressions
+
+`name`
+: A string indicating the name (label) of the exported column. It is the same as the label defined in the `attributes` of [the `output` parameter](#query-output).
+
+##### `records` {#response-query-simple-records}
+
+An array of exported search result records.
+
+Each record is exported as an array of column values, ordered by the [`output` parameter](#query-output)'s `attributes`.
+
+A value of a [date time type](http://groonga.org/docs/tutorial/data.html#date-and-time-type) column is returned as a string formatted in [W3C-DTF](http://www.w3.org/TR/NOTE-datetime "Date and Time Formats"), with the time zone.
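+
+For example, a hypothetical query exporting a `_key` column (`ShortText`) and an `age` column (`UInt32`) might produce a simple format result like the following (the column names and values are illustrative only):
+
+    {
+      "startTime"   : "2013-11-29T08:15:30+09:00",
+      "elapsedTime" : 1,
+      "count"       : 2,
+      "attributes"  : [
+        { "name" : "_key", "type" : "ShortText", "vector" : false },
+        { "name" : "age",  "type" : "UInt32",    "vector" : false }
+      ],
+      "records"     : [
+        ["Alice Arnold", 20],
+        ["Bob Baker",    30]
+      ]
+    }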
+
+#### Complex format result {#response-query-complex-attributes-and-records}
+
+A search result with `"complex"` as the value of `output`'s `format` will be returned as a hash like:
+
+    {
+      "startTime"   : "<Time to start the operation>",
+      "elapsedTime" : <Elapsed time to process the query),
+      "count"       : <Total number of search result records>,
+      "attributes"  : {
+        "<Name of the column 1>" : { "type"   : "<Type of the column 1>",
+                                     "vector" : <It this column is a vector column?> },
+        "<Name of the column 2>" : { "type"   : "<Type of the column 2>",
+                                     "vector" : <It this column is a vector column?> },
+        "<Name of the column 3 (with subrecords)>" : {
+          "attributes" : {
+            "<Name of the column 3-1>" : { "type"   : "<Type of the column 3-1>",
+                                           "vector" : <It this column is a vector column?> },
+            "<Name of the column 3-2>" : { "type"   : "<Type of the column 3-2>",
+                                           "vector" : <It this column is a vector column?> },
+            ...
+          }
+        },
+        ...
+      ],
+      "records"     : [
+        { "<Name of the column 1>" : <Value of the column 1 of the record 1>,
+          "<Name of the column 2>" : <Value of the column 2 of the record 1>,
+          "<Name of the column 3 (with subrecords)>" : [
+            { "<Name of the column 3-1>" : <Value of the column 3-1 of the subrecord 1 of record 1>,
+              "<Name of the column 3-2>" : <Value of the column 3-2 of the subrecord 1 of record 1>,
+              ... },
+            { "<Name of the column 3-1>" : <Value of the column 3-1 of the subrecord 2 of record 1>,
+              "<Name of the column 3-2>" : <Value of the column 3-2 of the subrecord 2 of record 1>,
+              ... },
+            ...
+          ],
+          ...                                                                },
+        { "<Name of the column 1>" : <Value of the column 1 of the record 2>,
+          "<Name of the column 2>" : <Value of the column 2 of the record 2>,
+          "<Name of the column 3 (with subrecords)>" : [
+            { "<Name of the column 3-1>" : <Value of the column 3-1 of the subrecord 1 of record 2>,
+              "<Name of the column 3-2>" : <Value of the column 3-2 of the subrecord 1 of record 2>,
+              ... },
+            { "<Name of the column 3-1>" : <Value of the column 3-1 of the subrecord 2 of record 2>,
+              "<Name of the column 3-2>" : <Value of the column 3-2 of the subrecord 2 of record 2>,
+              ... },
+            ...
+          ],
+          ...                                                                },
+        ...
+      ]
+    }
+
+This format is designed for human readability, at the cost of larger responses.
+It is recommended for low-traffic cases such as development, debugging, administrator-only features, and so on.
+
+##### `attributes` {#response-query-complex-attributes}
+
+A hash of information about the exported columns. Keys of the hash are the column names defined by the `attributes` of [the `output` parameter](#query-output), and values are the information of each column.
+
+The information of each column is returned as a hash in one of the following three forms, depending on the kind of the column. The hash has the following keys respectively:
+
+###### For ordinal columns
+
+`type`
+: A string meaning the value type of the column.
+  The type is indicated as one of [Groonga's primitive data types](http://groonga.org/docs/reference/types.html), or the name of an existing table for reference columns.
+
+`vector`
+: A boolean value indicating whether the column is a [vector column](http://groonga.org/docs/tutorial/data.html#vector-types) or not.
+  Possible values:
+  
+   * `true`  : It is a vector column.
+   * `false` : It is not a vector column, but a scalar column.
+
+###### For columns corresponding to subrecords
+
+`attributes`
+: An array of information about the columns of subrecords. The form is the same as the `attributes` for (main) records; in other words, `attributes` has a recursive structure.
+
+###### For expressions
+
+It has no keys; just an empty hash `{}` is returned.
+
+##### `records` {#response-query-complex-records}
+
+
+An array of exported search result records.
+
+Each record is exported as a hash. Keys of the hash are the column names defined by the `attributes` of the [`output` parameter](#query-output), and values are the column values.
+
+A value of a [date time type](http://groonga.org/docs/tutorial/data.html#date-and-time-type) column is returned as a string formatted in [W3C-DTF](http://www.w3.org/TR/NOTE-datetime "Date and Time Formats"), with the time zone.
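+
+For example, the same hypothetical result shown for the simple format (a `_key` column and an `age` column, illustrative only) looks like the following in the complex format:
+
+    {
+      "startTime"   : "2013-11-29T08:15:30+09:00",
+      "elapsedTime" : 1,
+      "count"       : 2,
+      "attributes"  : {
+        "_key" : { "type" : "ShortText", "vector" : false },
+        "age"  : { "type" : "UInt32",    "vector" : false }
+      },
+      "records"     : [
+        { "_key" : "Alice Arnold", "age" : 20 },
+        { "_key" : "Bob Baker",    "age" : 30 }
+      ]
+    }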
+
+
+## Error types {#errors}
+
+This command reports not only [general errors](/reference/message/#error) but also the following errors.
+
+### `MissingSourceParameter`
+
+Means you've forgotten to specify the `source` parameter. The status code is `400`.
+
+### `UnknownSource`
+
+Means that the `source` of a query refers to neither an existing table nor another query with that name. The status code is `404`.
+
+### `CyclicSource`
+
+Means that there is a circular reference among sources. The status code is `400`.
+
+### `SearchTimeout`
+
+Means that the engine could not finish processing the request within the time specified as `timeout`. The status code is `500`.

  Added: reference/1.1.0/commands/select/index.md (+129 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/commands/select/index.md    2014-11-30 23:20:40 +0900 (b032466)
@@ -0,0 +1,129 @@
+---
+title: select
+layout: en
+---
+
+* TOC
+{:toc}
+
+## Abstract {#abstract}
+
+The `select` command finds records from the specified table based on given conditions, and returns found records.
+
+This is compatible with [the `select` command of Groonga](http://groonga.org/docs/reference/commands/select.html).
+
+## API types {#api-types}
+
+### HTTP {#api-types-http}
+
+Request endpoint
+: `(Document Root)/d/select`
+
+Request method
+: `GET`
+
+Request URL parameters
+: Same as the list of [parameters](#parameters).
+
+Request body
+: Nothing.
+
+Response body
+: A [response message](#response).
+
+### REST {#api-types-rest}
+
+Not supported.
+
+### Fluentd {#api-types-fluentd}
+
+Style
+: Request-Response. One response message is always returned per request.
+
+`type` of the request
+: `select`
+
+`body` of the request
+: A hash of [parameters](#parameters).
+
+`type` of the response
+: `select.result`
+
+## Parameter syntax {#syntax}
+
+    {
+      "table"            : "<Name of the table>",
+      "match_columns"    : "<List of matching columns, separated by '||'>",
+      "query"            : "<Simple search conditions>",
+      "filter"           : "<Complex search conditions>",
+      "scorer"           : "<An expression to be applied to matched records>",
+      "sortby"           : "<List of sorting columns, separated by ','>",
+      "output_columns"   : "<List of returned columns, separated by ','>",
+      "offset"           : <Offset of paging>,
+      "limit"            : <Number of records to be returned>,
+      "drilldown"        : "<Column name to be drilldown-ed>",
+      "drilldown_sortby" : "List of sorting columns for drilldown's result, separated by ','>",
+      "drilldown_output_columns" :
+                           "List of returned columns for drilldown's result, separated by ','>",
+      "drilldown_offset" : <Offset of drilldown's paging>,
+      "drilldown_limit"  : <Number of drilldown results to be returned>,
+      "cache"            : "<Query cache option>",
+      "match_escalation_threshold":
+                           <Threshold to escalate search methods>,
+      "query_flags"      : "<Flags to customize query parameters>",
+      "query_expander"   : "<Arguments to expanding queries>"
+    }
+
+## Parameter details {#parameters}
+
+All parameters except `table` are optional.
+
+In version {{ site.droonga_version }}, only the following parameters are available. Others are simply ignored because they are not implemented yet.
+
+ * `table`
+ * `match_columns`
+ * `query`
+ * `query_flags`
+ * `filter`
+ * `output_columns`
+ * `offset`
+ * `limit`
+ * `drilldown`
+ * `drilldown_output_columns`
+ * `drilldown_sortby`
+ * `drilldown_offset`
+ * `drilldown_limit`
+
+All parameters are compatible with [the parameters of Groonga's `select` command](http://groonga.org/docs/reference/commands/select.html#parameters). See the linked document for more details.
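+
+For example, the following parameters (with a hypothetical `Books` table and its `title` and `description` columns) search for records matching "Groonga" and return up to 10 of them:
+
+    {
+      "table"          : "Books",
+      "match_columns"  : "title||description",
+      "query"          : "Groonga",
+      "output_columns" : "_key,title",
+      "limit"          : 10
+    }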
+
+## Responses {#response}
+
+This returns an array including search results as the response's `body`.
+
+    [
+      [
+        <Groonga's status code>,
+        <Start time>,
+        <Elapsed time>
+      ],
+      <List of columns>
+    ]
+
+The structure of the returned array is compatible with [the return value of Groonga's `select` command](http://groonga.org/docs/reference/commands/select.html#id6). See the linked document for more details.
+
+This command always returns a response with `200` as its `statusCode`, because this is a Groonga-compatible command and its errors must be handled in the same way as Groonga's.
+
+Response body's details:
+
+Status code
+: An integer which means the operation's result. Possible values are:
+  
+   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : Successfully processed.
+   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : An invalid argument was given.
+
+Start time
+: A UNIX time at which the operation was started.
+
+Elapsed time
+: A decimal number of seconds indicating the elapsed time of the operation.
+

  Added: reference/1.1.0/commands/system/index.md (+9 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/commands/system/index.md    2014-11-30 23:20:40 +0900 (b2b46f3)
@@ -0,0 +1,9 @@
+---
+title: system
+layout: en
+---
+
+`system` is a namespace for commands to report system information of the cluster.
+
+ * [system.status](status/): Reports status information of the cluster
+

  Added: reference/1.1.0/commands/system/status/index.md (+106 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/commands/system/status/index.md    2014-11-30 23:20:40 +0900 (8c95712)
@@ -0,0 +1,106 @@
+---
+title: system.status
+layout: en
+---
+
+* TOC
+{:toc}
+
+## Abstract {#abstract}
+
+The `system.status` command reports the current status of the cluster itself.
+
+## API types {#api-types}
+
+### HTTP {#api-types-http}
+
+Request endpoint
+: `(Document Root)/droonga/system/status`
+
+Request method
+: `GET`
+
+Request URL parameters
+: Nothing.
+
+Request body
+: Nothing.
+
+Response body
+: A [response message](#response).
+
+### REST {#api-types-rest}
+
+Not supported.
+
+### Fluentd {#api-types-fluentd}
+
+Style
+: Request-Response. One response message is always returned per request.
+
+`type` of the request
+: `system.status`
+
+`body` of the request
+: Nothing.
+
+`type` of the response
+: `system.status.result`
+
+## Parameter syntax {#syntax}
+
+This command has no parameter.
+
+## Usage {#usage}
+
+This command reports the list of nodes and their vital information.
+For example:
+
+    {
+      "type" : "system.status",
+      "body" : {}
+    }
+    
+    => {
+         "type" : "system.status.result",
+         "body" : {
+           "nodes": {
+             "192.168.0.10:10031/droonga": {
+               "live": true
+             },
+             "192.168.0.11:10031/droonga": {
+               "live": false
+             }
+           }
+         }
+       }
+
+
+## Responses {#response}
+
+This returns a hash like the following as the response's `body`, with `200` as its `statusCode`.
+
+    {
+      "nodes" : {
+        "<Identifier of the node 1>" : {
+          "live" : <Vital status of the node>
+        },
+        "<Identifier of the node 2>" : { ... },
+        ...
+      }
+    }
+
+`nodes`
+: A hash containing information about the nodes in the cluster.
+  Keys of the hash are identifiers of nodes defined in the `catalog.json`, in the format `hostname:port/tag`.
+  Each value indicates the status of the corresponding node and has the following information:
+  
+  `live`
+  : A boolean value indicating the vital state of the node.
+    If `true`, the node can process messages, and messages are delivered to it.
+    Otherwise, the node does not process any messages for now, because it is down or unreachable for some reason.
+
+
+## Error types {#errors}
+
+This command reports [general errors](/reference/message/#error).

  Added: reference/1.1.0/commands/table-create/index.md (+102 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/commands/table-create/index.md    2014-11-30 23:20:40 +0900 (2d96f48)
@@ -0,0 +1,102 @@
+---
+title: table_create
+layout: en
+---
+
+* TOC
+{:toc}
+
+## Abstract {#abstract}
+
+The `table_create` command creates a new table.
+
+This is compatible with [the `table_create` command of Groonga](http://groonga.org/docs/reference/commands/table_create.html).
+
+## API types {#api-types}
+
+### HTTP {#api-types-http}
+
+Request endpoint
+: `(Document Root)/d/table_create`
+
+Request method
+: `GET`
+
+Request URL parameters
+: Same as the list of [parameters](#parameters).
+
+Request body
+: Nothing.
+
+Response body
+: A [response message](#response).
+
+### REST {#api-types-rest}
+
+Not supported.
+
+### Fluentd {#api-types-fluentd}
+
+Style
+: Request-Response. One response message is always returned per request.
+
+`type` of the request
+: `table_create`
+
+`body` of the request
+: A hash of [parameters](#parameters).
+
+`type` of the response
+: `table_create.result`
+
+## Parameter syntax {#syntax}
+
+    {
+      "name"              : "<Name of the table>",
+      "flags"             : "<Flags for the table>",
+      "key_type"          : "<Type of the primary key>",
+      "value_type"        : "<Type of the value>",
+      "default_tokenizer" : "<Default tokenizer>",
+      "normalizer"        : "<Normalizer>"
+    }
+
+## Parameter details {#parameters}
+
+All parameters except `name` are optional.
+
+They are compatible with [the parameters of Groonga's `table_create` command](http://groonga.org/docs/reference/commands/table_create.html#parameters). See the linked document for more details.
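+
+For example, the following parameters create a hypothetical `Books` table whose primary key is a `ShortText` hash key:
+
+    {
+      "name"     : "Books",
+      "flags"    : "TABLE_HASH_KEY",
+      "key_type" : "ShortText"
+    }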
+
+## Responses {#response}
+
+This returns an array describing the result of the operation, as the `body`.
+
+    [
+      [
+        <Groonga's status code>,
+        <Start time>,
+        <Elapsed time>
+      ],
+      <Table is successfully created or not>
+    ]
+
+This command always returns a response with `200` as its `statusCode`, because this is a Groonga-compatible command and its errors must be handled in the same way as Groonga's.
+
+Response body's details:
+
+Status code
+: An integer which means the operation's result. Possible values are:
+  
+   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : Successfully processed.
+   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : An invalid argument was given.
+
+Start time
+: A UNIX time at which the operation was started.
+
+Elapsed time
+: A decimal number of seconds indicating the elapsed time of the operation.
+
+Table is successfully created or not
+: A boolean value indicating whether the table was successfully created or not. Possible values are:
+  
+   * `true`  : The table was successfully created.
+   * `false` : The table was not created.

  Added: reference/1.1.0/commands/table-list/index.md (+82 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/commands/table-list/index.md    2014-11-30 23:20:40 +0900 (3c97f08)
@@ -0,0 +1,82 @@
+---
+title: table_list
+layout: en
+---
+
+* TOC
+{:toc}
+
+## Abstract {#abstract}
+
+The `table_list` command reports the list of all existing tables in the dataset.
+
+This is compatible with [the `table_list` command of Groonga](http://groonga.org/docs/reference/commands/table_list.html).
+
+## API types {#api-types}
+
+### HTTP {#api-types-http}
+
+Request endpoint
+: `(Document Root)/d/table_list`
+
+Request method
+: `GET`
+
+Request URL parameters
+: Nothing.
+
+Request body
+: Nothing.
+
+Response body
+: A [response message](#response).
+
+### REST {#api-types-rest}
+
+Not supported.
+
+### Fluentd {#api-types-fluentd}
+
+Style
+: Request-Response. One response message is always returned per request.
+
+`type` of the request
+: `table_list`
+
+`body` of the request
+: `null` or a blank hash.
+
+`type` of the response
+: `table_list.result`
+
+## Responses {#response}
+
+This returns an array including the list of tables as the response's `body`.
+
+    [
+      [
+        <Groonga's status code>,
+        <Start time>,
+        <Elapsed time>
+      ],
+      <List of tables>
+    ]
+
+The structure of the returned array is compatible with [the return value of Groonga's `table_list` command](http://groonga.org/docs/reference/commands/table_list.html#id5). See the linked document for more details.
+
+This command always returns a response with `200` as its `statusCode`, because this is a Groonga-compatible command and its errors must be handled in the same way as Groonga's.
+
+Response body's details:
+
+Status code
+: An integer which means the operation's result. Possible values are:
+  
+   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : Successfully processed.
+   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : An invalid argument was given.
+
+Start time
+: A UNIX time at which the operation was started.
+
+Elapsed time
+: A decimal number of seconds indicating the elapsed time of the operation.
+

  Added: reference/1.1.0/commands/table-remove/index.md (+97 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/commands/table-remove/index.md    2014-11-30 23:20:40 +0900 (d953a59)
@@ -0,0 +1,97 @@
+---
+title: table_remove
+layout: en
+---
+
+* TOC
+{:toc}
+
+## Abstract {#abstract}
+
+The `table_remove` command removes an existing table.
+
+This is compatible with [the `table_remove` command of Groonga](http://groonga.org/docs/reference/commands/table_remove.html).
+
+## API types {#api-types}
+
+### HTTP {#api-types-http}
+
+Request endpoint
+: `(Document Root)/d/table_remove`
+
+Request method
+: `GET`
+
+Request URL parameters
+: Same as the list of [parameters](#parameters).
+
+Request body
+: Nothing.
+
+Response body
+: A [response message](#response).
+
+### REST {#api-types-rest}
+
+Not supported.
+
+### Fluentd {#api-types-fluentd}
+
+Style
+: Request-Response. One response message is always returned per request.
+
+`type` of the request
+: `table_remove`
+
+`body` of the request
+: A hash of [parameters](#parameters).
+
+`type` of the response
+: `table_remove.result`
+
+## Parameter syntax {#syntax}
+
+    {
+      "name" : "<Name of the table>"
+    }
+
+## Parameter details {#parameters}
+
+The only parameter, `name`, is required.
+
+It is compatible with [the parameters of Groonga's `table_remove` command](http://groonga.org/docs/reference/commands/table_remove.html#parameters). See the linked document for more details.
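+
+For example, the following HTTP request removes a hypothetical `Books` table:
+
+    GET /d/table_remove?name=Books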
+
+## Responses {#response}
+
+This returns an array describing the result of the operation, as the `body`.
+
+    [
+      [
+        <Groonga's status code>,
+        <Start time>,
+        <Elapsed time>
+      ],
+      <Table is successfully removed or not>
+    ]
+
+This command always returns a response with `200` as its `statusCode`, because this is a Groonga-compatible command and its errors must be handled in the same way as Groonga's.
+
+Response body's details:
+
+Status code
+: An integer which means the operation's result. Possible values are:
+  
+   * `0` (`Droonga::GroongaHandler::Status::SUCCESS`) : Successfully processed.
+   * `-22` (`Droonga::GroongaHandler::Status::INVALID_ARGUMENT`) : An invalid argument was given.
+
+Start time
+: A UNIX time at which the operation was started.
+
+Elapsed time
+: A decimal number of seconds indicating the elapsed time of the operation.
+
+Table is successfully removed or not
+: A boolean value indicating whether the table was successfully removed or not. Possible values are:
+  
+   * `true`  : The table was successfully removed.
+   * `false` : The table was not removed.

  Added: reference/1.1.0/http-server/index.md (+156 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/http-server/index.md    2014-11-30 23:20:40 +0900 (986ee8f)
@@ -0,0 +1,156 @@
+---
+title: HTTP Server
+layout: en
+---
+
+* TOC
+{:toc}
+
+## Abstract {#abstract}
+
+The [Droonga HTTP Server][droonga-http-server] is an HTTP protocol adapter for the Droonga Engine.
+
+The Droonga Engine itself supports only the fluentd protocol, so you would otherwise have to use `fluent-cat` or a similar tool to communicate with the Droonga Engine.
+This application provides the ability to communicate with the Droonga Engine via HTTP.
+
+## Install {#install}
+
+It is released as the [droonga-http-server npm module][], a [Node.js][] module package.
+You can install it via the `npm` command, like:
+
+    # npm install -g droonga-http-server
+
+## Usage {#usage}
+
+### Command line options {#usage-command}
+
+It includes a command `droonga-http-server` to start an HTTP server.
+You can start it with command line options, like:
+
+    # droonga-http-server --port 3003
+
+Available options and their default values are:
+
+`--port <13000>`
+: The port number which the server receives HTTP requests at.
+
+`--receive-host-name <127.0.0.1>`
+: The host name (or the IP address) of the computer on which the server itself is running.
+  It is used by the Droonga Engine, to send response messages to the protocol adapter.
+
+`--droonga-engine-host-name <127.0.0.1>`
+: The host name (or the IP address) of the computer which the Droonga Engine is running on.
+
+`--droonga-engine-port <24224>`
+: The port number which the Droonga Engine receives messages at.
+
+`--default-dataset <Droonga>`
+: The name of the default dataset.
+  It is used for requests triggered via built-in HTTP APIs.
+
+`--tag <droonga>`
+: The tag used for fluentd messages sent to the Droonga Engine.
+
+`--enable-logging`
+: If you specify this option, log messages are printed to the standard output.
+
+`--cache-size <100>`
+: The maximum size of the LRU response cache.
+  The Droonga HTTP server caches all responses to GET requests in RAM, up to this size.
+
+You have to specify appropriate values for your Droonga Engine. For example, if the HTTP server is running on the host 192.168.10.90 and the Droonga Engine is running on the host 192.168.10.100 with the following configurations:
+
+fluentd.conf:
+
+    <source>
+      type forward
+      port 24324
+    </source>
+    <match books.message>
+      name localhost:24224/books
+      type droonga
+    </match>
+    <match output.message>
+      type stdout
+    </match>
+
+catalog.json:
+
+    {
+      "version": 2,
+      "effectiveDate": "2013-09-01T00:00:00Z",
+      "datasets": {
+        "Books": {
+          ...
+        }
+      }
+    }
+
+Then, you'll start the HTTP server on the host 192.168.10.90, with options like:
+
+    # droonga-http-server --receive-host-name 192.168.10.90 \
+                          --droonga-engine-host-name 192.168.10.100 \
+                          --droonga-engine-port 24324 \
+                          --default-dataset Books \
+                          --tag books
+
+See also the [basic tutorial][].
+
+## Built-in APIs {#usage-api}
+
+The Droonga HTTP Server includes the following APIs:
+
+### REST API {#usage-rest}
+
+#### `GET /tables/<table name>` {#usage-rest-get-tables-table}
+
+This emits a simple [search request](../commands/search/).
+The [`source`](../commands/search/#query-source) is filled with the table name in the path.
+Available query parameters are listed below; see the example request after the list.
+
+`attributes`
+: Corresponds to [`output.attributes`](../commands/search/#query-output).
+  The value is a comma-separated list, like: `attributes=_key,name,age`.
+
+`query`
+: Corresponds to [`condition.*.query`](../commands/search/#query-condition-query-syntax-hash).
+  The value is a query string.
+
+`match_to`
+: Corresponds to [`condition.*.matchTo`](../commands/search/#query-condition-query-syntax-hash).
+  The value is a comma-separated list, like: `match_to=_key,name`.
+
+`match_escalation_threshold`
+: Corresponds to [`condition.*.matchEscalationThreshold`](../commands/search/#query-condition-query-syntax-hash).
+  The value is an integer.
+
+`script`
+: Corresponds to [`condition`](../commands/search/#query-condition-query-syntax-hash) in the script syntax.
+  If you specify both `query` and `script`, then they are combined with a logical `and` condition.
+
+`adjusters`
+: Corresponds to `adjusters`.
+
+`sort_by`
+: Corresponds to [`sortBy`](../commands/search/#query-sortBy).
+  The value is a column name string.
+
+`limit`
+: Corresponds to [`output.limit`](../commands/search/#query-output).
+  The value is an integer.
+
+`offset`
+: Corresponds to [`output.offset`](../commands/search/#query-output).
+  The value is an integer.
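+
+For example, the following request (with a hypothetical `Store` table and its `name` column) is translated into a `search` command using the parameters described above:
+
+    GET /tables/Store?query=Tokyo&match_to=name&attributes=_key,name&limit=10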
+
+### Groonga HTTP server compatible API {#usage-groonga}
+
+#### `GET /d/<command name>` {#usage-groonga-d}
+
+(TBD)
+
+
+  [basic tutorial]: ../../tutorial/basic/
+  [droonga-http-server]: https://github.com/droonga/droonga-http-server
+  [droonga-http-server npm module]: https://npmjs.org/package/droonga-http-server
+  [Node.js]: http://nodejs.org/

  Added: reference/1.1.0/index.md (+19 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/index.md    2014-11-30 23:20:40 +0900 (0a37859)
@@ -0,0 +1,19 @@
+---
+title: Reference manuals
+layout: en
+---
+
+[Catalog](catalog/)
+: Describes details of `catalog.json` which defines behavior of the Droonga Engine.
+
+[Message format](message/)
+: Describes details of the message format flowing in the Droonga Engines.
+
+[Commands](commands/)
+: Describes details of built-in commands available on the Droonga Engines.
+
+[HTTP Server](http-server/)
+: Describes usage of the [droonga-http-server](https://github.com/droonga/droonga-http-server).
+
+[Plugin development](plugin/)
+: Describes details of public APIs to develop custom plugins for the Droonga Engine.

  Added: reference/1.1.0/message/index.md (+206 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/message/index.md    2014-11-30 23:20:40 +0900 (bce02ba)
@@ -0,0 +1,206 @@
+---
+title: Message format
+layout: en
+---
+
+* TOC
+{:toc}
+
+
+## Request {#request}
+
+The basic format of a request message is like the following:
+
+    {
+      "id"      : "<ID of the message>",
+      "type"    : "<Type of the message>",
+      "replyTo" : "<Route to the receiver>",
+      "dataset" : "<Name of the target dataset>",
+      "body"    : <Body of the message>
+    }
+
+### `id` {#request-id}
+
+Abstract
+: The unique identifier for the message.
+
+Value
+: An identifier string. You can use any string in any format as you like, as long as it is unique. The given id of a request message is used for the [`inReplyTo`](#response-inReplyTo) information of its response.
+
+Default value
+: Nothing. This is required information.
+
+### `type` {#request-type}
+
+Abstract
+: The type of the message.
+
+Value
+: A type string of [a command](/reference/commands/).
+
+Default value
+: Nothing. This is required information.
+
+### `replyTo` {#request-replyTo}
+
+Abstract
+: The route to the response receiver.
+
+Value
+: A path string in the format `<hostname>:<port>/<tag>`, for example: `localhost:24224/output`.
+
+Default value
+: Nothing. This is optional. If you specify no `replyTo`, then the response message will be thrown away.
+
+### `dataset` {#request-dataset}
+
+Abstract
+: The target dataset.
+
+Value
+: A name string of a dataset.
+
+Default value
+: Nothing. This is required information.
+
+### `body` {#request-body}
+
+Abstract
+: The body of the message.
+
+Value
+: Object, string, number, boolean, or `null`.
+
+Default value
+: Nothing. This is optional.
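+
+For example, a complete request message for the `search` command might look like the following (the id, route, dataset name, and query are illustrative only):
+
+    {
+      "id"      : "request-1",
+      "type"    : "search",
+      "replyTo" : "localhost:24224/output",
+      "dataset" : "Default",
+      "body"    : {
+        "queries" : {
+          "books" : {
+            "source" : "Books",
+            "output" : { "elements" : ["count"] }
+          }
+        }
+      }
+    }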
+
+## Response {#response}
+
+The basic format of a response message is like the following:
+
+    {
+      "type"       : "<Type of the message>",
+      "inReplyTo"  : "<ID of the related request message>",
+      "statusCode" : <Status code>,
+      "body"       : <Body of the message>,
+      "errors"     : <Errors from nodes>
+    }
+
+### `type` {#response-type}
+
+Abstract
+: The type of the message.
+
+Value
+: A type string. Generally it is the type string of the request message with the suffix ".result".
+
+### `inReplyTo` {#response-inReplyTo}
+
+Abstract
+: The identifier of the related request message.
+
+Value
+: An identifier string of the related request message.
+
+### `statusCode` {#response-statusCode}
+
+Abstract
+: The result status for the request message.
+
+Value
+: A status code integer.
+
+Status codes of responses are similar to HTTP status codes. Possible values:
+
+`200` and other `2xx` statuses
+: The command is successfully processed.
+
+### `body` {#response-body}
+
+Abstract
+: The result information for the request message.
+
+Value
+: Object, string, number, boolean, or `null`.
+
+### `errors` {#response-errors}
+
+Abstract
+: All errors from nodes.
+
+Value
+: Object.
+
+This information will appear only when the command is distributed to multiple volumes and they returned errors. Otherwise, the response message will have no `errors` field. For more details, see [the "Error response" section](#error).
+
+## Error response {#error}
+
+Some commands can return an error response.
+
+An error response has the same `type` as a regular response, but a different `statusCode` and `body`. The general type of the error is indicated by the `statusCode`, and details are reported in the `body`.
+
+If a command is distributed to multiple volumes and they return errors, then the response message has an `errors` field. All errors from all nodes are stored in that field, like:
+
+    {
+      "type"       : "add.result",
+      "inReplyTo"  : "...",
+      "statusCode" : 400,
+      "body"       : {
+        "name":    "UnknownTable",
+        "message": ...
+      },
+      "errors"     : {
+        "/path/to/the/node1" : {
+          "statusCode" : 400,
+          "body"       : {
+            "name":    "UnknownTable",
+            "message": ...
+          }
+        },
+        "/path/to/the/node2" : {
+          "statusCode" : 400,
+          "body"       : {
+            "name":    "UnknownTable",
+            "message": ...
+          }
+        }
+      }
+    }
+
+In this case, one of the errors is exported as the main message `body`, as a representative.
+
+
+### Status codes of error responses {#error-status}
+
+Status codes of error responses are similar to HTTP status codes. Possible values:
+
+`400` and other `4xx` statuses
+: An error of the request message.
+
+`500` and other `5xx` statuses
+: An internal error of the Droonga Engine.
+
+### Body of error responses {#error-body}
+
+The basic format of the body of an error response is like the following:
+
+    {
+      "name"    : "<Type of the error>",
+      "message" : "<Human readable details of the error>",
+      "detail"  : <Other extra information for the error, in various formats>
+    }
+
+If there are no details, the `detail` field can be missing.
+
+#### Error types {#error-type}
+
+There are some general error types for any command.
+
+`MissingDatasetParameter`
+: Means you've forgotten to specify the `dataset`. The status code is `400`.
+
+`UnknownDataset`
+: Means you've specified a dataset which does not exist. The status code is `404`.
+
+`UnknownType`
+: Means there is no handler for the command given as the `type`. The status code is `400`.

  Added: reference/1.1.0/plugin/adapter/index.md (+308 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/plugin/adapter/index.md    2014-11-30 23:20:40 +0900 (124ee7a)
@@ -0,0 +1,308 @@
+---
+title: API set for plugins on the adaption phase
+layout: en
+---
+
+* TOC
+{:toc}
+
+
+## Abstract {#abstract}
+
+Each Droonga Engine plugin can have its own *adapter*. In the adaption phase, adapters can modify both incoming messages (from the Protocol Adapter to the Droonga Engine; in other words, requests) and outgoing messages (from the Droonga Engine to the Protocol Adapter; in other words, responses).
+
+
+### How to define an adapter? {#howto-define}
+
+For example, here is a sample plugin named "foo" with an adapter:
+
+~~~ruby
+require "droonga/plugin"
+
+module Droonga::Plugins::FooPlugin
+  extend Plugin
+  register("foo")
+
+  class Adapter < Droonga::Adapter
+    # operations to configure this adapter
+    XXXXXX = XXXXXX
+
+    def adapt_input(input_message)
+      # operations to modify incoming messages
+      input_message.XXXXXX = XXXXXX
+    end
+
+    def adapt_output(output_message)
+      # operations to modify outgoing messages
+      output_message.XXXXXX = XXXXXX
+    end
+  end
+end
+~~~
+
+Steps to define an adapter:
+
+ 1. Define a module for your plugin (ex. `Droonga::Plugins::FooPlugin`) and register it as a plugin. (required)
+ 2. Define an adapter class (ex. `Droonga::Plugins::FooPlugin::Adapter`) inheriting [`Droonga::Adapter`](#classes-Droonga-Adapter). (required)
+ 3. [Configure conditions to apply the adapter](#howto-configure). (required)
+ 4. Define adaption logic for incoming messages as [`#adapt_input`](#classes-Droonga-Adapter-adapt_input). (optional)
+ 5. Define adaption logic for outgoing messages as [`#adapt_output`](#classes-Droonga-Adapter-adapt_output). (optional)
+
+See also the [plugin development tutorial](../../../tutorial/plugin-development/adapter/).
+
+
+### How does an adapter work? {#how-works}
+
+An adapter works as follows:
+
+ 1. The Droonga Engine starts.
+    * A global instance of the adapter class (ex. `Droonga::Plugins::FooPlugin::Adapter`) is created and it is registered.
+      * The input pattern and the output pattern are registered.
+    * The Droonga Engine starts to wait for incoming messages.
+ 2. An incoming message is transferred from the Protocol Adapter to the Droonga Engine.
+    Then, the adaption phase (for an incoming message) starts.
+    * The adapter's [`#adapt_input`](#classes-Droonga-Adapter-adapt_input) is called, if the message matches the [input matching pattern](#config) of the adapter.
+    * The method can modify the given incoming message, via [its methods](#classes-Droonga-InputMessage).
+ 3. After all adapters are applied, the adaption phase for an incoming message ends, and the message is transferred to the next "planning" phase.
+ 4. An outgoing message returns from the previous "collection" phase.
+    Then, the adaption phase (for an outgoing message) starts.
+    * The adapter's [`#adapt_output`](#classes-Droonga-Adapter-adapt_output) is called, if the message meets following both requirements:
+      - It is originated from an incoming message which was processed by the adapter itself.
+      - It matches the [output matching pattern](#config) of the adapter.
+    * The method can modify the given outgoing message, via [its methods](#classes-Droonga-OutputMessage).
+ 5. After all adapters are applied, the adaption phase for an outgoing message ends, and the outgoing message is transferred to the Protocol Adapter.
+
+As described above, the Droonga Engine creates only one global instance of the adapter class for each plugin.
+You should not keep stateful information for a pair of incoming and outgoing messages as instance variables of the adapter itself.
+Instead, you should give stateful information as a part of the incoming message body, and receive it from the body of the corresponding outgoing message.
+
+Any error raised from the adapter is handled by the Droonga Engine itself. See also [error handling][].
+
+
+## Configurations {#config}
+
+`input_message.pattern` ([matching pattern][], optional, default=`nil`)
+: A [matching pattern][] for incoming messages.
+  If no pattern (`nil`) is given, any message is regarded as "matched".
+
+`output_message.pattern` ([matching pattern][], optional, default=`nil`)
+: A [matching pattern][] for outgoing messages.
+  If no pattern (`nil`) is given, any message is regarded as "matched".
+
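+For example, an adapter which should see only `search` requests and the responses derived from them can declare its patterns like the following minimal sketch (the plugin name and the `search.result` output pattern are assumptions for illustration):
+
+~~~ruby
+module Droonga::Plugins::SearchOnly
+  class Adapter < Droonga::Adapter
+    # Apply this adapter only to incoming "search" requests and
+    # to the outgoing messages originated from them.
+    input_message.pattern  = ["type", :equal, "search"]
+    output_message.pattern = ["type", :equal, "search.result"]
+  end
+end
+~~~
+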
+## Classes and methods {#classes}
+
+### `Droonga::Adapter` {#classes-Droonga-Adapter}
+
+This is the common base class of any adapter. Your plugin's adapter class must inherit this.
+
+#### `#adapt_input(input_message)` {#classes-Droonga-Adapter-adapt_input}
+
+This method receives a [`Droonga::InputMessage`](#classes-Droonga-InputMessage) wrapped incoming message.
+You can modify the incoming message via its methods.
+
+In this base class, this method is defined as just a placeholder and it does nothing.
+To modify incoming messages, you have to override it with your own version, like the following:
+
+~~~ruby
+module Droonga::Plugins::QueryFixer
+  class Adapter < Droonga::Adapter
+    def adapt_input(input_message)
+      input_message.body["query"] = "fixed query"
+    end
+  end
+end
+~~~
+
+#### `#adapt_output(output_message)` {#classes-Droonga-Adapter-adapt_output}
+
+This method receives a [`Droonga::OutputMessage`](#classes-Droonga-OutputMessage) wrapped outgoing message.
+You can modify the outgoing message via its methods.
+
+In this base class, this method is defined as just a placeholder and it does nothing.
+To modify outgoing messages, you have to override it with your own version, like the following:
+
+~~~ruby
+module Droonga::Plugins::ErrorConcealer
+  class Adapter < Droonga::Adapter
+    def adapt_output(output_message)
+      output_message.status_code = Droonga::StatusCode::OK
+    end
+  end
+end
+~~~
+
+### `Droonga::InputMessage` {#classes-Droonga-InputMessage}
+
+#### `#type`, `#type=(type)` {#classes-Droonga-InputMessage-type}
+
+This returns the `"type"` of the incoming message.
+
+You can override it by assigning a new string value, like:
+
+~~~ruby
+module Droonga::Plugins::MySearch
+  class Adapter < Droonga::Adapter
+    input_message.pattern = ["type", :equal, "my-search"]
+
+    def adapt_input(input_message)
+      p input_message.type
+      # => "my-search"
+      #    This message will be handled by a plugin
+      #    for the custom "my-search" type.
+
+      input_message.type = "search"
+
+      p input_message.type
+      # => "search"
+      #    The message type is changed.
+      #    This message will be handled by the "search" plugin,
+      #    as a regular search request.
+    end
+  end
+end
+~~~
+
+#### `#body`, `#body=(body)` {#classes-Droonga-InputMessage-body}
+
+This returns the `"body"` of the incoming message.
+
+You can override it by assigning a new value, partially or fully. For example:
+
+~~~ruby
+module Droonga::Plugins::MinimumLimit
+  class Adapter < Droonga::Adapter
+    input_message.pattern = ["type", :equal, "search"]
+
+    MAXIMUM_LIMIT = 10
+
+    def adapt_input(input_message)
+      input_message.body["queries"].each do |name, query|
+        query["output"] ||= {}
+        query["output"]["limit"] ||= MAXIMUM_LIMIT
+        query["output"]["limit"] = [query["output"]["limit"], MAXIMUM_LIMIT].min
+      end
+      # Now, all queries have "output.limit=10".
+    end
+  end
+end
+~~~
+
+Another case:
+
+~~~ruby
+module Droonga::Plugins::MySearch
+  class Adapter < Droonga::Adapter
+    input_message.pattern = ["type", :equal, "my-search"]
+
+    def adapt_input(input_message)
+      # Extract the query string from the custom type message.
+      query_string = input_message["body"]["query"]
+
+      # Construct internal search request for the "search" type.
+      input_message.type = "search"
+      input_message.body = {
+        "queries" => {
+          "source"    => "Store",
+          "condition" => {
+            "query"   => query_string,
+            "matchTo" => ["name"],
+          },
+          "output" => {
+            "elements" => ["records"],
+            "limit"    => 10,
+          },
+        },
+      }
+      # Now, both "type" and "body" are completely replaced.
+    end
+  end
+end
+~~~
+
+### `Droonga::OutputMessage` {#classes-Droonga-OutputMessage}
+
+#### `#status_code`, `#status_code=(status_code)` {#classes-Droonga-OutputMessage-status_code}
+
+This returns the `"statusCode"` of the outgoing message.
+
+You can override it by assigning a new status code. For example: 
+
+~~~ruby
+module Droonga::Plugins::ErrorConcealer
+  class Adapter < Droonga::Adapter
+    input_message.pattern = ["type", :equal, "search"]
+
+    def adapt_output(output_message)
+      if output_message.status_code == StatusCode::InternalServerError
+        output_message.status_code = Droonga::StatusCode::OK
+        output_message.body = {}
+        output_message.errors = nil
+        # Now any internal server error is ignored and clients
+        # receive regular responses.
+      end
+    end
+  end
+end
+~~~
+
+#### `#errors`, `#errors=(errors)` {#classes-Droonga-OutputMessage-errors}
+
+This returns the `"errors"` of the outgoing message.
+
+You can override it by assigning new error information, partially or fully. For example:
+
+~~~ruby
+module Droonga::Plugins::ErrorExporter
+  class Adapter < Droonga::Adapter
+    input_message.pattern = ["type", :equal, "search"]
+
+    def adapt_output(output_message)
+      output_message.errors.delete(secret_database)
+      # Delete error information from secret database
+
+      output_message.body["errors"] = {
+        "records" => output_message.errors.collect do |database, error|
+          {
+            "database" => database,
+            "error" => error
+          }
+        end,
+      }
+      # Convert error information into a fake search result named "errors".
+    end
+  end
+end
+~~~
+
+#### `#body`, `#body=(body)` {#classes-Droonga-OutputMessage-body}
+
+This returns the `"body"` of the outgoing message.
+
+You can override it by assigning a new value, partially or fully. For example:
+
+~~~ruby
+module Droonga::Plugins::SponsoredSearch
+  class Adapter < Droonga::Adapter
+    input_message.pattern = ["type", :equal, "search"]
+
+    def adapt_output(output_message)
+      output_message.body.each do |name, result|
+        next unless result["records"]
+        result["records"].unshift(sponsored_entry)
+      end
+      # Now all search results include sponsored entry.
+    end
+
+    def sponsored_entry
+      {
+        "title"=> "SALE!",
+        "url"=>   "http://..."
+      }
+    end
+  end
+end
+~~~
+
+
+  [matching pattern]: ../matching-pattern/
+  [error handling]: ../error/

  Added: reference/1.1.0/plugin/collector/index.md (+51 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/plugin/collector/index.md    2014-11-30 23:20:40 +0900 (f065268)
@@ -0,0 +1,51 @@
+---
+title: Collector
+layout: en
+---
+
+* TOC
+{:toc}
+
+
+## Abstract {#abstract}
+
+A collector merges two input values into a single value.
+The Droonga Engine collects three or more values by applying the specified collector to two of them at a time, repeatedly.
+
+## Built-in collector classes {#builtin-collectors}
+
+There are some pre-defined collector classes used by built-in plugins.
+Of course they are available for your custom plugins.
+
+### `Droonga::Collectors::And`
+
+Returns the result of combining two values with the logical `and` operator.
+If both values are logically true, then one of them (which one is indeterminate) becomes the result.
+
+The values `null` (`nil`) and `false` are treated as false.
+Any other value is treated as true.
+
+### `Droonga::Collectors::Or`
+
+Returns the result of combining two values with the logical `or` operator.
+If only one of them is logically true, then that value becomes the result.
+Otherwise, if the values are logically the same, one of them (which one is indeterminate) becomes the result.
+
+The values `null` (`nil`) and `false` are treated as false.
+Any other value is treated as true.
+
+### `Droonga::Collectors::Sum`
+
+Returns a summarized value of the two input values.
+
+This collector works in a slightly complicated way, as shown in the sketch after this list.
+
+ * If one of the values is `null` (`nil`), then the other value becomes the result.
+ * If both values are hashes, then a merged hash becomes the result.
+   * The result hash has all keys of the two hashes.
+     If both have the same key, then one of their values appears as the value of that key in the result hash.
+   * It is indeterminate which value becomes the base.
+ * Otherwise the result of `a + b` becomes the result.
+   * If the values are arrays or strings, a concatenated value becomes the result.
+     It is indeterminate which value becomes the left-hand operand.
+
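+The following is a minimal Ruby sketch of the merge semantics described above; it is an illustration only, not the actual implementation of `Droonga::Collectors::Sum`:
+
+~~~ruby
+def sum_collect(a, b)
+  return b if a.nil?
+  return a if b.nil?
+  if a.is_a?(Hash) and b.is_a?(Hash)
+    # The merged hash has all keys of both hashes; for duplicated
+    # keys, which value wins is indeterminate.
+    a.merge(b)
+  else
+    # Numbers are added; arrays and strings are concatenated.
+    a + b
+  end
+end
+
+sum_collect(nil, 1)                  # => 1
+sum_collect({"x" => 1}, {"y" => 2})  # => {"x" => 1, "y" => 2}
+sum_collect([1, 2], [3])             # => [1, 2, 3]
+~~~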

  Added: reference/1.1.0/plugin/error/index.md (+61 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/plugin/error/index.md    2014-11-30 23:20:40 +0900 (8e4bf9e)
@@ -0,0 +1,61 @@
+---
+title: Error handling in plugins
+layout: en
+---
+
+* TOC
+{:toc}
+
+
+## Abstract {#abstract}
+
+Any unhandled error raised from a plugin is returned as an [error response][] for the corresponding incoming message, with the status code `500` (meaning "internal server error").
+
+If you want formatted error information to be returned, then rescue errors and raise your custom errors inheriting `Droonga::ErrorMessage::BadRequest` or `Droonga::ErrorMessage::InternalServerError`, instead of raw errors.
+(These classes are already included in the base class of plugins, so you can define your custom errors easily, like `class CustomError < BadRequest`.)
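+
+For example, a custom error for a missing parameter could be defined and raised like the following minimal sketch (the error class name and message are illustrative only):
+
+    # in your plugin's handler class; BadRequest is already included
+    class MissingTableParameter < BadRequest
+    end
+    
+    raise MissingTableParameter.new("You must specify the table.")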
+
+
+## Built-in error classes {#builtin-errors}
+
+There are some pre-defined error classes used by built-in plugins and the Droonga Engine itself.
+
+### `Droonga::ErrorMessage::NotFound`
+
+Means an error where the specified resource is not found in the dataset or any other source. For example:
+
+    # the second argument means "details" of the error. (optional)
+    raise Droonga::NotFound.new("#{name} is not found!", :elapsed_time => elapsed_time)
+
+### `Droonga::ErrorMessage::BadRequest`
+
+Means any error originating from the incoming message itself, e.g. a syntax error, validation error, and so on. For example:
+
+    # the second argument means "details" of the error. (optional)
+    raise Droonga::NotFound.new("Syntax error in #{query}!", :detail => detail)
+
+### `Droonga::ErrorMessage::InternalServerError`
+
+Means any other unexpected error, e.g. a timeout, file I/O error, and so on. For example:
+
+    # the second argument means "details" of the error. (optional)
+    raise Droonga::ErrorMessage::InternalServerError.new("busy!", :elapsed_time => elapsed_time)
+
+
+## Built-in status codes {#builtin-status-codes}
+
+You should use the following status codes, or other codes consistent with [the general rules for status codes](../../message/#error-status).
+
+`Droonga::StatusCode::OK`
+: Equals to `200`.
+
+`Droonga::StatusCode::NOT_FOUND`
+: Equals to `404`.
+
+`Droonga::StatusCode::BAD_REQUEST`
+: Equals to `400`.
+
+`Droonga::StatusCode::INTERNAL_ERROR`
+: Equals to `500`.
+
+
+  [error response]: ../../message/#error

  Added: reference/1.1.0/plugin/handler/index.md (+227 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/plugin/handler/index.md    2014-11-30 23:20:40 +0900 (b5e0a49)
@@ -0,0 +1,227 @@
+---
+title: API set for plugins on the handling phase
+layout: en
+---
+
+* TOC
+{:toc}
+
+
+## Abstract {#abstract}
+
+Each Droonga Engine plugin can have its own *handler*.
+In the handling phase, handlers can process a request and return a result.
+
+
+### How to define a handler? {#howto-define}
+
+For example, here is a sample plugin named "foo" with a handler:
+
+~~~ruby
+require "droonga/plugin"
+
+module Droonga::Plugins::FooPlugin
+  extend Plugin
+  register("foo")
+
+  define_single_step do |step|
+    step.name = "foo"
+    step.handler = :Handler
+    step.collector = Collectors::And
+  end
+
+  class Handler < Droonga::Handler
+    def handle(message)
+      # operations to process a request
+    end
+  end
+end
+~~~
+
+Steps to define a handler:
+
+ 1. Define a module for your plugin (ex. `Droonga::Plugins::FooPlugin`) and register it as a plugin. (required)
+ 2. Define a "single step" corresponding to the handler you are going to implement, via [`Droonga::SingleStepDefinition`](#class-Droonga-SingleStepDefinition). (required)
+ 3. Define a handler class (ex. `Droonga::Plugins::FooPlugin::Handler`) inheriting [`Droonga::Handler`](#classes-Droonga-Handler). (required)
+ 4. Define handling logic for requests as [`#handle`](#classes-Droonga-Handler-handle). (optional)
+
+See also the [plugin development tutorial](../../../tutorial/plugin-development/handler/).
+
+
+### How does a handler work? {#how-works}
+
+A handler works as follows:
+
+ 1. The Droonga Engine starts.
+    * Your custom steps are registered.
+      Your custom handler classes also.
+    * Then the Droonga Engine starts to wait for request messages.
+ 2. A request message is transferred from the adaption phase.
+    Then, the processing phase starts.
+    * The Droonga Engine finds a step definition from the message type.
+    * The Droonga Engine builds a "single step" based on the registered definition.
+    * A "single step" creates an instance of the registered handler class.
+      Then the Droonga Engine enters to the handling phase.
+      * The handler's [`#handle`](#classes-Droonga-Handler-handle) is called with a task message including the request.
+        * The method can process the given incoming message as you like.
+        * The method returns a result value, as the output.
+      * After the handler finishes, the handling phase for the task message (and the request) ends.
+    * If no "step" is found for the type, nothing happens.
+    * All "step"s finish their task, the processing phase for the request ends.
+
+As described above, the Droonga Engine creates an instance of the handler class for each request.
+
+Any error raised from the handler is handled by the Droonga Engine itself. See also [error handling][].
+
+
+## Configurations {#config}
+
+`action.synchronous` (boolean, optional, default=`false`)
+: Indicates that the request must be processed synchronously.
+  For example, a request to define a new column in a table must be processed after a request to define the table itself, if the table does not exist yet.
+  Then handlers for these requests have the configuration `action.synchronous = true`.
+
+
+## Classes and methods {#classes}
+
+### `Droonga::SingleStepDefinition` {#classes-Droonga-SingleStepDefinition}
+
+This provides methods to describe the "step" corresponding to the handler.
+
+#### `#name`, `#name=(name)` {#classes-Droonga-SingleStepDefinition-name}
+
+Describes the name of the step itself.
+Possible value is a string.
+
+The Droonga Engine treats an incoming message as a request for a "command" if there is a step whose `name` equals the message's `type`.
+In other words, this defines the name of the command corresponding to the step itself.
+
+
+#### `#handler`, `#handler=(handler)` {#classes-Droonga-SingleStepDefinition-handler}
+
+Associates a specific handler class to the step itself.
+You can specify the class as any one of following choices:
+
+ * A reference to a handler class itself, like `Handler` or `Droonga::Plugins::FooPlugin::Handler`.
+   Of course, the class has to be already defined at that time.
+ * A symbol which refers to the name of a handler class in the current namespace, like `:Handler`.
+   This is useful if you want to describe the step at first and define the actual class after that.
+ * A class path string of a handler class, like `"Droonga::Plugins::FooPlugin::Handler"`.
+   This is also useful to define the class itself after the description.
+
+You must define the referenced class by the time the Droonga Engine actually processes the step, if you specify the name of the handler class as a symbol or a string.
+If the Droonga Engine fails to find the actual handler class, or no handler is specified, then the Droonga Engine does nothing for the request.
+
+#### `#collector`, `#collector=(collector)` {#classes-Droonga-SingleStepDefinition-collector}
+
+Associates a specific collector class to the step itself.
+You can specify the class as any one of following choices:
+
+ * A reference to a collector class itself, like `Collectors::Something` or `Droonga::Plugins::FooPlugin::MyCollector`.
+   Of course, the class has to be already defined at that time.
+ * A symbol which refers to the name of a collector class in the current namespace, like `:MyCollector`.
+   This is useful if you want to describe the step at first and define the actual class after that.
+ * A class path string of a collector class, like `"Droonga::Plugins::FooPlugin::MyCollector"`.
+   This is also useful to define the class itself after the description.
+
+You must define the referenced class by the time the Droonga Engine actually collects results, if you specify the name of the collector class as a symbol or a string.
+If the Droonga Engine fails to find the actual collector class, or no collector is specified, then the Droonga Engine doesn't collect results and returns multiple messages as results.
+
+See also [descriptions of collectors][collector].
+
+#### `#write`, `#write=(write)` {#classes-Droonga-SingleStepDefinition-write}
+
+Describes whether the step modifies any data in the storage or not.
+If a request aims to modify some data in the storage, the request must be processed on all replicas.
+Otherwise the Droonga Engine can optimize handling of the step:
+for example, caching results, reducing CPU/memory usage, and so on.
+
+Possible values are:
+
+ * `true` means "this step can modify the storage."
+ * `false` means "this step never modifies the storage." (default)
+
+#### `#inputs`, `#inputs=(inputs)` {#classes-Droonga-SingleStepDefinition-inputs}
+
+(TBD)
+
+#### `#output`, `#output=(output)` {#classes-Droonga-SingleStepDefinition-output}
+
+(TBD)
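+
+Putting the methods above together, a plugin typically describes its step with a definition block. The following is a minimal sketch, assuming the `define_single_step` registration helper shown in the plugin development tutorial; the plugin, command, and collector names are only illustrative:
+
+~~~ruby
+module Droonga::Plugins::CountRecords
+  extend Droonga::Plugin
+  register("count-records")
+
+  define_single_step do |step|
+    step.name      = "countRecords"   # command name, matched against the message's "type"
+    step.handler   = :Handler         # resolved lazily, so the class may be defined later
+    step.collector = Collectors::Sum  # how results from multiple volumes are merged
+    step.write     = false            # read-only: the Engine may optimize handling
+  end
+
+  class Handler < Droonga::Handler
+    def handle(message)
+      # ...
+    end
+  end
+end
+~~~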
+
+### `Droonga::Handler` {#classes-Droonga-Handler}
+
+This is the common base class of any handler.
+Your plugin's handler class must inherit this.
+
+#### `#handle(message)` {#classes-Droonga-Handler-handle}
+
+This method receives a [`Droonga::HandlerMessage`](#classes-Droonga-HandlerMessage) wrapped task message.
+You can read the request information via its methods.
+
+In this base class, this method is defined as just a placeholder and it does nothing.
+To process messages, you have to override it with your own implementation, like the following:
+
+~~~ruby
+module Droonga::Plugins::MySearch
+  class Handler < Droonga::Handler
+    def handle(message)
+      search_query = message.request["body"]["query"]
+      ...
+      { ... } # the result
+    end
+  end
+end
+~~~
+
+The Droonga Engine uses the returned value of this method as the result of the handling.
+It will be used to build the body of the unified response, which is then delivered to the Protocol Adapter.
+
+
+### `Droonga::HandlerMessage` {#classes-Droonga-HandlerMessage}
+
+This is a wrapper for a task message.
+
+The Droonga Engine analyzes a transferred request message, and builds multiple task messages to process the request.
+A task message has some information: a request, a step, descendant tasks, and so on.
+
+#### `#request` {#classes-Droonga-HandlerMessage-request}
+
+This returns the request message.
+You can read the request body via this method. For example:
+
+~~~ruby
+module Droonga::Plugins::MySearch
+  class Handler < Droonga::Handler
+    def handle(message)
+      request = message.request
+      search_query = request["body"]["query"]
+      ...
+    end
+  end
+end
+~~~
+
+#### `@context` {#classes-Droonga-HandlerMessage-context}
+
+This is a reference to the `Groonga::Context` instance for the storage of the corresponding volume.
+See the [class reference of Rroonga][Groonga::Context].
+
+You can use any feature of Rroonga via `@context`.
+For example, this code returns the number of records in the specified table:
+
+~~~ruby
+module Droonga::Plugins::CountRecords
+  class Handler < Droonga::Handler
+    def handle(message)
+      request = message.request
+      table_name = request["body"]["table"]
+      count = @context[table_name].size
+    end
+  end
+end
+~~~
+
+  [error handling]: ../error/
+  [collector]: ../collector/
+  [Groonga::Context]: http://ranguba.org/rroonga/en/Groonga/Context.html

  Added: reference/1.1.0/plugin/index.md (+13 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/plugin/index.md    2014-11-30 23:20:40 +0900 (89e7398)
@@ -0,0 +1,13 @@
+---
+title: Plugin development
+layout: en
+---
+
+The Droonga Engine has different API sets for plugins, one for each phase.
+See also the [plugin development tutorial](../../tutorial/plugin-development/).
+
+ * [API set for the adaption phase](adapter/)
+ * [API set for the handling phase](handler/)
+ * [Matching pattern for messages](matching-pattern/)
+ * [Collector](collector/)
+ * [Error handling](error/)

  Added: reference/1.1.0/plugin/matching-pattern/index.md (+233 -0) 100644
===================================================================
--- /dev/null
+++ reference/1.1.0/plugin/matching-pattern/index.md    2014-11-30 23:20:40 +0900 (0df1b64)
@@ -0,0 +1,233 @@
+---
+title: Matching pattern for messages
+layout: en
+---
+
+* TOC
+{:toc}
+
+
+## Abstract {#abstract}
+
+The Droonga Engine provides a tiny language to specify patterns of messages, called the *matching pattern*.
+It is used to specify the target messages of various operations, e.g. plugins.
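+
+For example, an adapter plugin narrows down which incoming messages it processes with such a pattern. The following is a sketch, assuming the `input_message.pattern` declaration from the API set for the adaption phase; the plugin and command names are hypothetical:
+
+~~~ruby
+module Droonga::Plugins::MySearchAdapter
+  extend Droonga::Plugin
+  register("my-search-adapter")
+
+  class Adapter < Droonga::Adapter
+    # Only messages whose "type" equals "my-search" are passed to this adapter.
+    input_message.pattern = ["type", :equal, "my-search"]
+
+    def adapt_input(input_message)
+      # ...
+    end
+  end
+end
+~~~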
+
+
+## Examples {#examples}
+
+### Simple matching
+
+    pattern = ["type", :equal, "search"]
+
+This matches messages like:
+
+    {
+      "type": "search",
+      ...
+    }
+
+### Matching for a deep target
+
+    pattern = ["body.success", :equal, true]
+
+This matches messages like:
+
+    {
+      "type": "add.result",
+      "body": {
+        "success": true
+      }
+    }
+
+It doesn't match:
+
+    {
+      "type": "add.result",
+      "body": {
+        "success": false
+      }
+    }
+
+### Nested patterns
+
+    pattern = [
+                 ["type", :equal, "table_create"],
+                 :or,
+                 ["body.success", :equal, true]
+              ]
+
+This matches both:
+
+    {
+      "type": "table_create",
+      ...
+    }
+
+and:
+
+    {
+      "type": "column_create",
+      ...
+      "body": {
+        "success": true
+      }
+    }
+
+
+## Syntax {#syntax}
+
+There are two types of matching patterns: "basic pattern" and "nested pattern".
+
+### Basic pattern {#syntax-basic}
+
+#### Structure {#syntax-basic-structure}
+
+A basic pattern is described as an array of 2 or more elements, like the following:
+
+    ["type", :equal, "search"]
+
+ * The first element is a *target path*. It specifies the location of the information to be checked, in the [message][].
+ * The second element is an *operator*. It specifies how the information at the target path should be checked.
+ * The third element is an *argument for the operator*. It is a primitive value (string, numeric, or boolean) or an array of values. Some operators require no argument.
+
+#### Target path {#syntax-basic-target-path}
+
+The target path is specified as a string, like:
+
+    "body.success"
+
+The matching mechanism of the Droonga Engine interprets it as a dot-separated list of *path components*.
+A path component represents the property in the message with the same name.
+So, the example above means the location:
+
+    {
+      "body": {
+        "success": <target>
+      }
+    }
+
+
+
+
+#### Available operators {#syntax-basic-operators}
+
+The operator is specified as a symbol.
+
+`:equal`
+: Returns `true`, if the target value is equal to the given value. Otherwise `false`.
+  For example,
+  
+      ["type", :equal, "search"]
+  
+  The pattern above matches a message like the following:
+  
+      {
+        "type": "search",
+        ...
+      }
+
+`:in`
+: Returns `true`, if the target value is in the given array of values. Otherwise `false`.
+  For example,
+  
+      ["type", :in, ["search", "select"]]
+  
+  The pattern above matches a message like the following:
+  
+      {
+        "type": "select",
+        ...
+      }
+  
+  But it doesn't match:
+  
+      {
+        "type": "find",
+        ...
+      }
+
+`:include`
+: Returns `true` if the target array of values includes the given value. Otherwise `false`.
+  In other words, this works in the opposite direction of the `:in` operator: the array is on the message side and a single value is given.
+  For example,
+  
+      ["body.tags", :include, "News"]
+  
+  The pattern above matches a message like the following:
+  
+      {
+        "type": "my.notification",
+        "body": {
+          "tags": ["News", "Groonga", "Droonga", "Fluentd"]
+        }
+      }
+
+`:exist`
+: Returns `true` if the target exists. Otherwise `false`.
+  For example,
+  
+      ["body.comments", :exist, "News"]
+  
+  The pattern above matches a message like the following:
+  
+      {
+        "type": "my.notification",
+        "body": {
+          "title": "Hello!",
+          "comments": []
+        }
+      }
+  
+  But it doesn't match:
+  
+      {
+        "type": "my.notification",
+        "body": {
+          "title": "Hello!"
+        }
+      }
+
+`:start_with`
+: Returns `true` if the target string value starts with the given string. Otherwise `false`.
+  For example,
+  
+      ["body.path", :start_with, "/archive/"]
+  
+  The pattern above matches a message like the following:
+  
+      {
+        "type": "my.notification",
+        "body": {
+          "path": "/archive/2014/02/28.html"
+        }
+      }
+
+
+### Nested pattern {#syntax-nested}
+
+#### Structure {#syntax-nested-structure}
+
+A nested pattern is described as an array of 3 elements, like the following:
+
+    [
+      ["type", :equal, "table_create"],
+      :or,
+      ["type", :equal, "column_create"]
+    ]
+
+ * The first and the third elements are patterns, basic or nested. (In other words, you can nest patterns recursively.)
+ * The second element is a *logical operator*.
+
+#### Available operators {#syntax-nested-operators}
+
+`:and`
+: Returns `true` if both given patterns are evaluated as `true`. Otherwise `false`.
+
+`:or`
+: Returns `true` if at least one of the given patterns (the first or the third element) is evaluated as `true`. Otherwise `false`.
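+
+The following standalone Ruby sketch mimics how such patterns could be evaluated, only to illustrate the semantics described above; it is not the Droonga Engine's actual implementation and supports just the operators listed here:
+
+~~~ruby
+# Illustrative only: a tiny evaluator for the matching pattern language.
+def match?(pattern, message)
+  first, operator, arg = pattern
+  case operator
+  when :and
+    match?(first, message) && match?(arg, message)
+  when :or
+    match?(first, message) || match?(arg, message)
+  else
+    # Resolve the dot-separated target path against the message.
+    target = first.split(".").inject(message) do |value, component|
+      value.is_a?(Hash) ? value[component] : nil
+    end
+    case operator
+    when :equal      then target == arg
+    when :in         then arg.include?(target)
+    when :include    then target.is_a?(Array) && target.include?(arg)
+    when :exist      then !target.nil?
+    when :start_with then target.is_a?(String) && target.start_with?(arg)
+    else false
+    end
+  end
+end
+
+message = { "type" => "add.result", "body" => { "success" => true } }
+match?([["type", :equal, "table_create"],
+        :or,
+        ["body.success", :equal, true]], message) # => true
+~~~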
+
+
+
+
+  [message]:../../message/
+

  Added: tutorial/1.1.0/add-replica/index.md (+383 -0) 100644
===================================================================
--- /dev/null
+++ tutorial/1.1.0/add-replica/index.md    2014-11-30 23:20:40 +0900 (54ee22c)
@@ -0,0 +1,383 @@
+---
+title: "Droonga tutorial: How to add a new replica to an existing cluster?"
+layout: en
+---
+
+* TOC
+{:toc}
+
+## The goal of this tutorial
+
+Learning steps to add a new replica node, remove an existing replica, and replace a replica with a new one, for your existing [Droonga][] cluster.
+
+## Precondition
+
+* You must have an existing Droonga cluster with some data.
+  Please complete the ["getting started" tutorial](../groonga/) before this.
+* You must know how to duplicate data between multiple clusters.
+  Please complete the ["How to backup and restore the database?" tutorial](../dump-restore/) before this.
+
+This tutorial assumes that there are two existing Droonga nodes prepared by the [first tutorial](../groonga/): `node0` (`192.168.100.50`) and `node1` (`192.168.100.51`), and there is another computer `node2` (`192.168.100.52`) for a new node.
+If you have Droonga nodes with other names, read `node0`, `node1` and `node2` in the following descriptions as yours.
+
+## What's "replica"?
+
+There are two axes, "replica" and "slice", for Droonga nodes.
+
+All "replica" nodes have completely equal data, so they can process your requests (ex. "search") parallelly.
+You can increase the capacity of your cluster to process increasing requests, by adding new replicas.
+
+On the other hand, "slice" nodes have different data, for example, one node contains data of the year 2013, another has data of 2014.
+You can increase the capacity of your cluster to store increasing data, by adding new slices.
+
+Currently, for a Droonga cluster which is configured as a Groonga compatible system, only replicas can be added; slices cannot.
+We'll improve extensibility for slices in the future.
+
+Anyway, this tutorial explains how to add a new replica node to an existing Droonga cluster.
+Here we go!
+
+## Add a new replica node to an existing cluster
+
+In this case you don't have to stop the cluster for read-only requests like "search".
+You can add a new replica in the background, without taking your service down.
+
+On the other hand, you have to stop the inflow of new data to the cluster until the new node starts working.
+(In the future we'll provide a mechanism to add new nodes completely silently without stopping the data flow, but currently it is not available.)
+
+Assume that there is a Droonga cluster constructed with two replica nodes `node0` and `node1`, and we are going to add a new replica node `node2`.
+
+### Setup a new node
+
+First, prepare a new computer, install the required software and configure it.
+
+~~~
+(on node2)
+# curl https://raw.githubusercontent.com/droonga/droonga-engine/master/install.sh | \
+    HOST=node2 bash
+# curl https://raw.githubusercontent.com/droonga/droonga-http-server/master/install.sh | \
+    ENGINE_HOST=node2 HOST=node2 bash
+~~~
+
+Note: you cannot add a non-empty node to an existing cluster.
+If the computer was used as a Droonga node before, you must clear the old data first.
+
+~~~
+(on node2)
+# droonga-engine-configure --quiet \
+                           --clear --reset-config --reset-catalog \
+                           --host=node2
+# droonga-http-server-configure --quiet --reset-config \
+                                --droonga-engine-host-name=node2 \
+                                --receive-host-name=node2
+~~~
+
+Let's start services.
+
+~~~
+(on node2)
+# service droonga-engine start
+# service droonga-http-server start
+~~~
+
+Currently, the new node doesn't work as a node of the existing cluster.
+You can confirm that, via the `system.status` command:
+
+~~~
+$ curl "http://node0:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node1:10031/droonga": {
+      "live": true
+    }
+  }
+}
+$ curl "http://node1:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node1:10031/droonga": {
+      "live": true
+    }
+  }
+}
+$ curl "http://node2:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node2:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+### Suspend the inflow of "write" requests
+
+Before starting to change the cluster composition, you must suspend the inflow of "write" requests to the cluster, because data has to be synchronized to the new replica.
+Otherwise, the newly added replica will contain incomplete data and results for requests to the cluster will become unstable.
+
+What's "write" request?
+In particular, these commands modify data in the cluster:
+
+ * `add`
+ * `column_create`
+ * `column_remove`
+ * `delete`
+ * `load`
+ * `table_create`
+ * `table_remove`
+
+If you load new data via the `load` command triggered by a batch script started as a cronjob, disable the job.
+If a crawler agent adds new data via the `add` command, stop it.
+If you put a Fluentd instance as a buffer between the crawler or loader and the cluster, stop outgoing messages from the buffer.
+
+If you are reading this tutorial sequentially after the [previous topic](../dump-restore/), there are no incoming requests, so you have nothing to do.
+
+### Joining a new replica node to the cluster
+
+To add a new replica node to an existing cluster, you just run the `droonga-engine-join` command on one of the existing replica nodes or on the new replica node, in the directory where the `catalog.json` is located, like:
+
+~~~
+(on node2)
+$ droonga-engine-join --host=node2 \
+                      --replica-source-host=node0 \
+                      --receiver-host=node2
+Start to join a new node node2
+       to the cluster of node0
+                     via node2 (this host)"
+
+Joining new replica to the cluster...
+...
+Update existing hosts in the cluster...
+...
+Done.
+~~~
+
+You can run the command on a different node, like:
+
+~~~
+(on node1)
+$ droonga-engine-join --host=node2 \
+                      --replica-source-host=node0 \
+                      --receiver-host=node1
+Start to join a new node node2
+       to the cluster of node0
+                     via node1 (this host)"
+~~~
+
+ * You must specify the host name (or the IP address) of the new replica node, via the `--host` option.
+ * You must specify the host name (or the IP address) of an existing node of the cluster, via the `--replica-source-host` option.
+ * You must specify the host name (or the IP address) of the working machine via the `--receiver-host` option.
+
+Then the command automatically starts to synchronize all data of the cluster to the new replica node.
+After the data is successfully synchronized, the node restarts and joins the cluster automatically.
+All nodes' `catalog.json` files are also updated, and now the new node starts working as a replica in the cluster.
+
+You can confirm that they are working as a cluster, via the `system.status` command:
+
+~~~
+$ curl "http://node0:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node1:10031/droonga": {
+      "live": true
+    },
+    "node2:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+Because the new node `node2` has become a member of the cluster, the `droonga-http-server` on each node also distributes messages to `node2` automatically.
+
+
+### Resume the inflow of "write" requests
+
+OK, it's time.
+Because all replica nodes are completely synchronized, the cluster can now process any request stably.
+Resume the inflow of requests which can modify the data in the cluster - cronjobs, crawlers, buffers, and so on.
+
+With that, a new replica node has joined your Droonga cluster successfully.
+
+
+## Remove an existing replica node from an existing cluster
+
+A Droonga node can die for various fatal reasons - for example, the OOM killer, a disk-full error, hardware trouble, etc.
+Because nodes in a Droonga cluster observe each other and automatically stop delivering messages to dead nodes, the cluster keeps working even if there are some dead nodes.
+Eventually, though, you have to remove dead nodes from the cluster.
+
+Of course, even if a node is still working, you may plan to remove it to reuse it for another purpose.
+
+Assume that there is a Droonga cluster constructed with three replica nodes `node0`, `node1` and `node2`, and you are planning to remove the last node `node2` from the cluster.
+
+### Unjoin an existing replica from the cluster
+
+To remove a replica from an existing cluster, you just run the `droonga-engine-unjoin` command on any existing node in the cluster, like:
+
+~~~
+(on node0)
+$ droonga-engine-unjoin --host=node2 \
+                        --receiver-host=node0
+Start to unjoin a node node2
+                    by node0 (this host)
+
+Unjoining replica from the cluster...
+...
+Done.
+~~~
+
+ * You must specify the host name (or the IP address) of an existing node to be removed from the cluster, via the `--host` option.
+ * You must specify the host name (or the IP address) of the working machine via the `--receiver-host` option.
+
+Then the specified node automatically unjoins from the cluster, and all nodes' `catalog.json` files are also updated.
+Now, the node has been successfully unjoined from the cluster.
+
+You can confirm that the `node2` is successfully unjoined, via the `system.status` command:
+
+~~~
+$ curl "http://node0:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node1:10031/droonga": {
+      "live": true
+    }
+  }
+}
+$ curl "http://node1:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node1:10031/droonga": {
+      "live": true
+    }
+  }
+}
+$ curl "http://node2:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node2:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+Because the node `node2` is not a member of the cluster anymore, `droonga-http-server` on `node0` and `node1` never send messages to the `droonga-engine` on `node2`.
+On the other hand, because the `droonga-http-server` on `node2` is associated only with the `droonga-engine` on the same node, it never sends messages to other nodes.
+
+
+
+## Replace an existing replica node in a cluster with a new one
+
+Replacing a node is a combination of the instructions above.
+
+Assume that there is a Droonga cluster constructed with two replica nodes `node0` and `node1`, the node `node1` is unstable, and you are planning to replace it with a new node `node2`.
+
+### Unjoin an existing replica from the cluster
+
+First, remove the unstable node.
+Remove the node from the cluster, like:
+
+~~~
+(on node0)
+$ droonga-engine-unjoin --host=node1
+~~~
+
+Now the node is gone.
+You can confirm that via the `system.status` command:
+
+~~~
+$ curl "http://node0:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+### Add a new replica
+
+Next, setup the new replica `node2`.
+Install required packages, generate the `catalog.json`, and start services.
+
+~~~
+(on node2)
+# curl https://raw.githubusercontent.com/droonga/droonga-engine/master/install.sh | \
+    HOST=node2 bash
+# curl https://raw.githubusercontent.com/droonga/droonga-http-server/master/install.sh | \
+    ENGINE_HOST=node2 HOST=node2 bash
+~~~
+
+If the computer was used as a Droonga node before, you must clear the old data instead of installing:
+
+~~~
+(on node2)
+# droonga-engine-configure --quiet \
+                           --clear --reset-config --reset-catalog \
+                           --host=node2
+# droonga-http-server-configure --quiet --reset-config \
+                                --droonga-engine-host-name=node2 \
+                                --receive-host-name=node2
+~~~
+
+Then, join the node to the cluster.
+
+~~~
+(on node2)
+$ droonga-engine-join --host=node2 \
+                      --replica-source-host=node0
+~~~
+
+Now you have a Droonga cluster constructed with the two nodes `node0` and `node2`.
+
+You can confirm that, via the `system.status` command:
+
+~~~
+$ curl "http://node0:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node2:10031/droonga": {
+      "live": true
+    }
+  }
+}
+$ curl "http://node2:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node2:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+## Conclusion
+
+In this tutorial, you added a new replica node to an existing [Droonga][] cluster.
+Moreover, you removed an existing replica, and replaced a replica with a new one.
+
+  [Ubuntu]: http://www.ubuntu.com/
+  [Droonga]: https://droonga.org/
+  [Groonga]: http://groonga.org/
+  [command reference]: ../../reference/commands/

  Added: tutorial/1.1.0/basic/index.md (+1119 -0) 100755
===================================================================
--- /dev/null
+++ tutorial/1.1.0/basic/index.md    2014-11-30 23:20:40 +0900 (d6e9e08)
@@ -0,0 +1,1119 @@
+---
+title: "Droonga tutorial: Basic usage of low-layer commands"
+layout: en
+---
+
+* TOC
+{:toc}
+
+## The goal of this tutorial
+
+Learning steps to set up a Droonga-based search system by yourself, with low-layer commands of Droonga.
+
+## Precondition
+
+* You must have basic knowledge and experience to set up and operate an [Ubuntu][] or [CentOS][] server.
+* You must have basic knowledge and experience to develop applications based on [Ruby][] and [Node.js][].
+
+## Abstract
+
+### What is Droonga?
+
+It is a data processing engine based on a distributed architecture, named after the term "distributed Groonga".
+
+Droonga is built from several components released as separate packages. With those packages, you can develop various data processing systems (for example, a fulltext search engine) with the high scalability of a distributed architecture.
+
+### Components of Droonga
+
+#### Droonga Engine
+
+The component "Droonga Engine" is the main part to process data with a distributed architecture. It is triggered by requests and processes various data.
+
+This component is developed and released as the [droonga-engine][].
+The protocol is compatible with [Fluentd].
+
+It internally uses [Groonga][] as its search engine.
+Groonga is an open source fulltext search engine with a column-store feature.
+
+#### Protocol Adapter
+
+The component "Protocol Adapter" provides ability for clients to communicate with a Droonga engine, using various protocols.
+
+The only protocol a Droonga engine itself speaks is the Fluentd protocol.
+Protocol adapters translate between it and other common protocols (like HTTP, Socket.IO, etc.) for communication between the Droonga Engine and clients.
+
+Currently, there is an implementation for HTTP: [droonga-http-server][], a [Node.js][] module package.
+In other words, droonga-http-server is one of the Droonga Protocol Adapters, namely the "Droonga HTTP Protocol Adapter".
+
+## Abstract of the system described in this tutorial
+
+This tutorial describes steps to build a system like the following:
+
+    +-------------+              +------------------+             +----------------+
+    | Web Browser |  <-------->  | Protocol Adapter |  <------->  | Droonga Engine |
+    +-------------+   HTTP       +------------------+   Fluent    +----------------+
+                                 w/droonga-http        protocol   w/droonga-engine
+                                           -server
+
+
+                                 \--------------------------------------------------/
+                                       This tutorial describes about this part.
+
+User agents (e.g. a Web browser) send search requests to a protocol adapter. The adapter receives them, and sends internal (translated) search requests to a Droonga engine. The engine actually processes them. Search results are sent from the engine to the protocol adapter, and finally delivered to the user agents.
+
+For example, let's try to build a database system to find [Starbucks stores in New York](http://geocommons.com/overlays/430038).
+
+
+## Prepare an environment for experiments
+
+Prepare a computer at first. This tutorial describes steps to develop a search service based on Droonga, on an existing computer.
+The following instructions are basically written for a successfully prepared virtual machine of `Ubuntu 14.04 x64`, `CentOS 7 x64`, or `CentOS 6.5 x64` on the service [DigitalOcean](https://www.digitalocean.com/), with an available console.
+
+NOTE: Make sure to use instances with >= 2GB memory equipped, at least during installation of the required packages for Droonga. Otherwise, you may experience strange build errors.
+
+Assume that the host is `192.168.100.50`.
+
+## Install Droonga engine
+
+The part "Droonga engine" stores the database and provides the search feature actually.
+In this section we install a droonga-engine and load searchable data to the database.
+
+### Install `droonga-engine`
+
+Download the installation script and run it with `bash` as the root user:
+
+~~~
+# curl https://raw.githubusercontent.com/droonga/droonga-engine/master/install.sh | \
+    bash
+...
+Installing droonga-engine from RubyGems...
+...
+Preparing the user...
+...
+Setting up the configuration directory...
+This node is configured with a hostname XXXXXXXX.
+
+Registering droonga-engine as a service...
+...
+Successfully installed droonga-engine.
+~~~
+
+### Prepare configuration files to start `droonga-engine`
+
+All configuration files and physical databases are placed under a `droonga` directory in the home directory of the service user `droonga-engine`:
+
+    $ cd ~droonga-engine/droonga
+
+Then, put (overwrite) a configuration file `catalog.json` like the following into the directory:
+
+catalog.json:
+
+    {
+      "version": 2,
+      "effectiveDate": "2013-09-01T00:00:00Z",
+      "datasets": {
+        "Default": {
+          "nWorkers": 4,
+          "plugins": ["groonga", "crud", "search", "dump", "status"],
+          "schema": {
+            "Store": {
+              "type": "Hash",
+              "keyType": "ShortText",
+              "columns": {
+                "location": {
+                  "type": "Scalar",
+                  "valueType": "WGS84GeoPoint"
+                }
+              }
+            },
+            "Location": {
+              "type": "PatriciaTrie",
+              "keyType": "WGS84GeoPoint",
+              "columns": {
+                "store": {
+                  "type": "Index",
+                  "valueType": "Store",
+                  "indexOptions": {
+                    "sources": ["location"]
+                  }
+                }
+              }
+            },
+            "Term": {
+              "type": "PatriciaTrie",
+              "keyType": "ShortText",
+              "normalizer": "NormalizerAuto",
+              "tokenizer": "TokenBigram",
+              "columns": {
+                "stores__key": {
+                  "type": "Index",
+                  "valueType": "Store",
+                  "indexOptions": {
+                    "position": true,
+                    "sources": ["_key"]
+                  }
+                }
+              }
+            }
+          },
+          "replicas": [
+            {
+              "dimension": "_key",
+              "slicer": "hash",
+              "slices": [
+                {
+                  "volume": {
+                    "address": "192.168.100.50:10031/droonga.000"
+                  }
+                },
+                {
+                  "volume": {
+                    "address": "192.168.100.50:10031/droonga.001"
+                  }
+                },
+                {
+                  "volume": {
+                    "address": "192.168.100.50:10031/droonga.002"
+                  }
+                }
+              ]
+            },
+            {
+              "dimension": "_key",
+              "slicer": "hash",
+              "slices": [
+                {
+                  "volume": {
+                    "address": "192.168.100.50:10031/droonga.010"
+                  }
+                },
+                {
+                  "volume": {
+                    "address": "192.168.100.50:10031/droonga.011"
+                  }
+                },
+                {
+                  "volume": {
+                    "address": "192.168.100.50:10031/droonga.012"
+                  }
+                }
+              ]
+            }
+          ]
+        }
+      }
+    }
+
+This `catalog.json` defines a dataset `Default` as:
+
+ * At the top level, there is one volume based on two sub volumes, called "replicas".
+ * At the next lower level, one replica volume is based on three sub volumes, called "slices".
+   They are the minimum elements constructing a Droonga dataset.
+
+These six atomic volumes having `"address"` information are internally called *single volumes*.
+The `"address"` indicates the location of the corresponding physical storage, which is a Groonga database; they are managed by `droonga-engine` instances automatically.
+
+For more details of the configuration file `catalog.json`, see [the reference manual of catalog.json](/reference/catalog).
+
+### Start and stop the `droonga-engine` service
+
+The `droonga-engine` service can be started via the `service` command:
+
+~~~
+# service droonga-engine start
+~~~
+
+To stop it, you also have to use the `service` command:
+
+~~~
+# service droonga-engine stop
+~~~
+
+After confirmation, start the `droonga-engine` again.
+
+~~~
+# service droonga-engine start
+~~~
+
+### Create a database
+
+After a Droonga engine is started, let's load data.
+Prepare `stores.jsons` including location data of stores.
+
+stores.jsons:
+
+~~~
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "1st Avenue & 75th St. - New York NY  (W)",
+    "values": {
+      "location": "40.770262,-73.954798"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "76th & Second - New York NY  (W)",
+    "values": {
+      "location": "40.771056,-73.956757"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "2nd Ave. & 9th Street - New York NY",
+    "values": {
+      "location": "40.729445,-73.987471"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "15th & Third - New York NY  (W)",
+    "values": {
+      "location": "40.733946,-73.9867"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "41st and Broadway - New York NY  (W)",
+    "values": {
+      "location": "40.755111,-73.986225"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "84th & Third Ave - New York NY  (W)",
+    "values": {
+      "location": "40.777485,-73.954979"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "150 E. 42nd Street - New York NY  (W)",
+    "values": {
+      "location": "40.750784,-73.975582"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "West 43rd and Broadway - New York NY  (W)",
+    "values": {
+      "location": "40.756197,-73.985624"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "Macy's 35th Street Balcony - New York NY",
+    "values": {
+      "location": "40.750703,-73.989787"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "Macy's 6th Floor - Herald Square - New York NY  (W)",
+    "values": {
+      "location": "40.750703,-73.989787"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "Herald Square- Macy's - New York NY",
+    "values": {
+      "location": "40.750703,-73.989787"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "Macy's 5th Floor - Herald Square - New York NY  (W)",
+    "values": {
+      "location": "40.750703,-73.989787"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "80th & York - New York NY  (W)",
+    "values": {
+      "location": "40.772204,-73.949862"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "Columbus @ 67th - New York NY  (W)",
+    "values": {
+      "location": "40.774009,-73.981472"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "45th & Broadway - New York NY  (W)",
+    "values": {
+      "location": "40.75766,-73.985719"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "Marriott Marquis - Lobby - New York NY",
+    "values": {
+      "location": "40.759123,-73.984927"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "Second @ 81st - New York NY  (W)",
+    "values": {
+      "location": "40.77466,-73.954447"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "52nd & Seventh - New York NY  (W)",
+    "values": {
+      "location": "40.761829,-73.981141"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "1585 Broadway (47th) - New York NY  (W)",
+    "values": {
+      "location": "40.759806,-73.985066"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "85th & First - New York NY  (W)",
+    "values": {
+      "location": "40.776101,-73.949971"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "92nd & 3rd - New York NY  (W)",
+    "values": {
+      "location": "40.782606,-73.951235"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "165 Broadway - 1 Liberty - New York NY  (W)",
+    "values": {
+      "location": "40.709727,-74.011395"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "1656 Broadway - New York NY  (W)",
+    "values": {
+      "location": "40.762434,-73.983364"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "54th & Broadway - New York NY  (W)",
+    "values": {
+      "location": "40.764275,-73.982361"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "Limited Brands-NYC - New York NY",
+    "values": {
+      "location": "40.765219,-73.982025"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "19th & 8th - New York NY  (W)",
+    "values": {
+      "location": "40.743218,-74.000605"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "60th & Broadway-II - New York NY  (W)",
+    "values": {
+      "location": "40.769196,-73.982576"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "63rd & Broadway - New York NY  (W)",
+    "values": {
+      "location": "40.771376,-73.982709"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "195 Broadway - New York NY  (W)",
+    "values": {
+      "location": "40.710703,-74.009485"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "2 Broadway - New York NY  (W)",
+    "values": {
+      "location": "40.704538,-74.01324"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "2 Columbus Ave. - New York NY  (W)",
+    "values": {
+      "location": "40.769262,-73.984764"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "NY Plaza - New York NY  (W)",
+    "values": {
+      "location": "40.702802,-74.012784"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "36th and Madison - New York NY  (W)",
+    "values": {
+      "location": "40.748917,-73.982683"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "125th St. btwn Adam Clayton & FDB - New York NY",
+    "values": {
+      "location": "40.808952,-73.948229"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "70th & Broadway - New York NY  (W)",
+    "values": {
+      "location": "40.777463,-73.982237"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "2138 Broadway - New York NY  (W)",
+    "values": {
+      "location": "40.781078,-73.981167"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "118th & Frederick Douglas Blvd. - New York NY  (W)",
+    "values": {
+      "location": "40.806176,-73.954109"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "42nd & Second - New York NY  (W)",
+    "values": {
+      "location": "40.750069,-73.973393"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "Broadway @ 81st - New York NY  (W)",
+    "values": {
+      "location": "40.784972,-73.978987"
+    }
+  }
+}
+{
+  "dataset": "Default",
+  "type": "add",
+  "body": {
+    "table": "Store",
+    "key": "Fashion Inst of Technology - New York NY",
+    "values": {
+      "location": "40.746948,-73.994557"
+    }
+  }
+}
+~~~
+
+Open another terminal and send the JSON to the Droonga engine.
+
+Send `stores.jsons` as follows:
+
+~~~
+$ droonga-request stores.jsons
+Elapsed time: 0.01101195
+[
+  "droonga.message",
+  1393562553,
+  {
+    "inReplyTo": "1393562553.8918273",
+    "statusCode": 200,
+    "type": "add.result",
+    "body": true
+  }
+]
+...
+Elapsed time: 0.004817463
+[
+  "droonga.message",
+  1393562554,
+  {
+    "inReplyTo": "1393562554.2447524",
+    "statusCode": 200,
+    "type": "add.result",
+    "body": true
+  }
+]
+~~~
+
+Now a Droonga engine for searching the Starbucks store database is ready.
+
+### Send request with droonga-request
+
+Check if it is working. Create a query as a JSON file as follows.
+
+search-all-stores.json:
+
+~~~
+{
+  "dataset": "Default",
+  "type": "search",
+  "body": {
+    "queries": {
+      "stores": {
+        "source": "Store",
+        "output": {
+          "elements": [
+            "startTime",
+            "elapsedTime",
+            "count",
+            "attributes",
+            "records"
+          ],
+          "attributes": ["_key"],
+          "limit": -1
+        }
+      }
+    }
+  }
+}
+~~~
+
+Send the request to the Droonga Engine:
+
+~~~
+$ droonga-request search-all-stores.json
+Elapsed time: 0.008286785
+[
+  "droonga.message",
+  1393562604,
+  {
+    "inReplyTo": "1393562604.4970381",
+    "statusCode": 200,
+    "type": "search.result",
+    "body": {
+      "stores": {
+        "count": 40,
+        "records": [
+          [
+            "15th & Third - New York NY  (W)"
+          ],
+          [
+            "41st and Broadway - New York NY  (W)"
+          ],
+          [
+            "84th & Third Ave - New York NY  (W)"
+          ],
+          [
+            "Macy's 35th Street Balcony - New York NY"
+          ],
+          [
+            "Second @ 81st - New York NY  (W)"
+          ],
+          [
+            "52nd & Seventh - New York NY  (W)"
+          ],
+          [
+            "1585 Broadway (47th) - New York NY  (W)"
+          ],
+          [
+            "54th & Broadway - New York NY  (W)"
+          ],
+          [
+            "60th & Broadway-II - New York NY  (W)"
+          ],
+          [
+            "63rd & Broadway - New York NY  (W)"
+          ],
+          [
+            "2 Columbus Ave. - New York NY  (W)"
+          ],
+          [
+            "NY Plaza - New York NY  (W)"
+          ],
+          [
+            "2138 Broadway - New York NY  (W)"
+          ],
+          [
+            "Broadway @ 81st - New York NY  (W)"
+          ],
+          [
+            "76th & Second - New York NY  (W)"
+          ],
+          [
+            "2nd Ave. & 9th Street - New York NY"
+          ],
+          [
+            "150 E. 42nd Street - New York NY  (W)"
+          ],
+          [
+            "Macy's 6th Floor - Herald Square - New York NY  (W)"
+          ],
+          [
+            "Herald Square- Macy's - New York NY"
+          ],
+          [
+            "Macy's 5th Floor - Herald Square - New York NY  (W)"
+          ],
+          [
+            "Marriott Marquis - Lobby - New York NY"
+          ],
+          [
+            "85th & First - New York NY  (W)"
+          ],
+          [
+            "1656 Broadway - New York NY  (W)"
+          ],
+          [
+            "Limited Brands-NYC - New York NY"
+          ],
+          [
+            "2 Broadway - New York NY  (W)"
+          ],
+          [
+            "36th and Madison - New York NY  (W)"
+          ],
+          [
+            "125th St. btwn Adam Clayton & FDB - New York NY"
+          ],
+          [
+            "118th & Frederick Douglas Blvd. - New York NY  (W)"
+          ],
+          [
+            "Fashion Inst of Technology - New York NY"
+          ],
+          [
+            "1st Avenue & 75th St. - New York NY  (W)"
+          ],
+          [
+            "West 43rd and Broadway - New York NY  (W)"
+          ],
+          [
+            "80th & York - New York NY  (W)"
+          ],
+          [
+            "Columbus @ 67th - New York NY  (W)"
+          ],
+          [
+            "45th & Broadway - New York NY  (W)"
+          ],
+          [
+            "92nd & 3rd - New York NY  (W)"
+          ],
+          [
+            "165 Broadway - 1 Liberty - New York NY  (W)"
+          ],
+          [
+            "19th & 8th - New York NY  (W)"
+          ],
+          [
+            "195 Broadway - New York NY  (W)"
+          ],
+          [
+            "70th & Broadway - New York NY  (W)"
+          ],
+          [
+            "42nd & Second - New York NY  (W)"
+          ]
+        ]
+      }
+    }
+  }
+]
+~~~
+
+Now the store names are retrieved. The engine seems to be working correctly.
+Next, set up a protocol adapter to accept search requests from clients via HTTP.
+
+## Setup an HTTP Protocol Adapter
+
+Let's use the `droonga-http-server` as an HTTP protocol adapter.
+
+### Install the droonga-http-server
+
+Download the installation script and run it with `bash` as the root user:
+
+~~~
+# curl https://raw.githubusercontent.com/droonga/droonga-http-server/master/install.sh | \
+    bash
+...
+Installing droonga-http-server from npmjs.org...
+...
+Preparing the user...
+...
+Setting up the configuration directory...
+The droonga-engine service is detected on this node.
+The droonga-http-server is configured to be connected
+to this node (XXXXXXXX).
+This node is configured with a hostname XXXXXXXX.
+
+Registering droonga-http-server as a service...
+...
+Successfully installed droonga-http-server.
+~~~
+
+### Start and stop the `droonga-http-server` service
+
+The `droonga-http-server` service can be started via the `service` command:
+
+~~~
+# service droonga-http-server start
+~~~
+
+To stop it, you also have to use the `service` command:
+
+~~~
+# service droonga-http-server stop
+~~~
+
+After confirmation, start the `droonga-http-server` again.
+
+~~~
+# service droonga-http-server start
+~~~
+
+### Search request via HTTP
+
+We're all set. Let's send a search request to the protocol adapter via HTTP. At first, try to get all records of the `Store` table by a request like the following. (Note: The `attributes=_key` parameter means "export the value of the column `_key` to the search result". If you don't set the parameter, each record returned in the `records` will become just a blank array. You can specify multiple column names with the delimiter `,`. For example `attributes=_key,location` will return both the primary key and the location for each record.)
+
+    $ curl "http://192.168.100.50:10041/tables/Store?attributes=_key&limit=-1"
+    {
+      "stores": {
+        "count": 40,
+        "records": [
+          [
+            "15th & Third - New York NY  (W)"
+          ],
+          [
+            "41st and Broadway - New York NY  (W)"
+          ],
+          [
+            "84th & Third Ave - New York NY  (W)"
+          ],
+          [
+            "Macy's 35th Street Balcony - New York NY"
+          ],
+          [
+            "Second @ 81st - New York NY  (W)"
+          ],
+          [
+            "52nd & Seventh - New York NY  (W)"
+          ],
+          [
+            "1585 Broadway (47th) - New York NY  (W)"
+          ],
+          [
+            "54th & Broadway - New York NY  (W)"
+          ],
+          [
+            "60th & Broadway-II - New York NY  (W)"
+          ],
+          [
+            "63rd & Broadway - New York NY  (W)"
+          ],
+          [
+            "2 Columbus Ave. - New York NY  (W)"
+          ],
+          [
+            "NY Plaza - New York NY  (W)"
+          ],
+          [
+            "2138 Broadway - New York NY  (W)"
+          ],
+          [
+            "Broadway @ 81st - New York NY  (W)"
+          ],
+          [
+            "76th & Second - New York NY  (W)"
+          ],
+          [
+            "2nd Ave. & 9th Street - New York NY"
+          ],
+          [
+            "150 E. 42nd Street - New York NY  (W)"
+          ],
+          [
+            "Macy's 6th Floor - Herald Square - New York NY  (W)"
+          ],
+          [
+            "Herald Square- Macy's - New York NY"
+          ],
+          [
+            "Macy's 5th Floor - Herald Square - New York NY  (W)"
+          ],
+          [
+            "Marriott Marquis - Lobby - New York NY"
+          ],
+          [
+            "85th & First - New York NY  (W)"
+          ],
+          [
+            "1656 Broadway - New York NY  (W)"
+          ],
+          [
+            "Limited Brands-NYC - New York NY"
+          ],
+          [
+            "2 Broadway - New York NY  (W)"
+          ],
+          [
+            "36th and Madison - New York NY  (W)"
+          ],
+          [
+            "125th St. btwn Adam Clayton & FDB - New York NY"
+          ],
+          [
+            "118th & Frederick Douglas Blvd. - New York NY  (W)"
+          ],
+          [
+            "Fashion Inst of Technology - New York NY"
+          ],
+          [
+            "1st Avenue & 75th St. - New York NY  (W)"
+          ],
+          [
+            "West 43rd and Broadway - New York NY  (W)"
+          ],
+          [
+            "80th & York - New York NY  (W)"
+          ],
+          [
+            "Columbus @ 67th - New York NY  (W)"
+          ],
+          [
+            "45th & Broadway - New York NY  (W)"
+          ],
+          [
+            "92nd & 3rd - New York NY  (W)"
+          ],
+          [
+            "165 Broadway - 1 Liberty - New York NY  (W)"
+          ],
+          [
+            "19th & 8th - New York NY  (W)"
+          ],
+          [
+            "195 Broadway - New York NY  (W)"
+          ],
+          [
+            "70th & Broadway - New York NY  (W)"
+          ],
+          [
+            "42nd & Second - New York NY  (W)"
+          ]
+        ]
+      }
+    }
+
+Because the `count` says `40`, you know that all 40 records in the table were returned. Search result records are returned as the array `records`.
+
+As the next step, let's try a more meaningful query. To search for stores which contain "Columbus" in their name, give `Columbus` as the parameter `query`, and give `_key` as the parameter `match_to`, which specifies the column to be searched. Then:
+
+    $ curl "http://192.168.100.50:10041/tables/Store?query=Columbus&match_to=_key&attributes=_key&limit=-1"
+    {
+      "stores": {
+        "count": 2,
+        "records": [
+          [
+            "Columbus @ 67th - New York NY  (W)"
+          ],
+          [
+            "2 Columbus Ave. - New York NY  (W)"
+          ]
+        ]
+      }
+    }
+
+As a result, two stores are found by the search condition.
+
+For more details of the Droonga HTTP Server, see the [reference manual][http-server].
+
+
+## Conclusion
+
+In this tutorial, you set up both packages [droonga-engine][] and [droonga-http-server][] which construct a [Droonga][] service, on an [Ubuntu Linux][Ubuntu] or [CentOS][] computer.
+Moreover, you built a search system based on an HTTP protocol adapter with a Droonga engine, and searched successfully.
+
+
+  [http-server]: ../../reference/http-server/
+  [Ubuntu]: http://www.ubuntu.com/
+  [CentOS]: https://www.centos.org/
+  [Droonga]: https://droonga.org/
+  [droonga-engine]: https://github.com/droonga/droonga-engine
+  [droonga-http-server]: https://github.com/droonga/droonga-http-server
+  [Groonga]: http://groonga.org/
+  [Ruby]: http://www.ruby-lang.org/
+  [nvm]: https://github.com/creationix/nvm
+  [Socket.IO]: http://socket.io/
+  [Fluentd]: http://fluentd.org/
+  [Node.js]: http://nodejs.org/

  Added: tutorial/1.1.0/benchmark/index.md (+803 -0) 100644
===================================================================
--- /dev/null
+++ tutorial/1.1.0/benchmark/index.md    2014-11-30 23:20:40 +0900 (9b8d38c)
@@ -0,0 +1,803 @@
+---
+title: "How to benchmark Droonga with Groonga?"
+layout: en
+---
+
+* TOC
+{:toc}
+
+<!--
+this is based on https://github.com/droonga/presentation-droonga-meetup-1-introduction/blob/master/benchmark/README.md
+-->
+
+## The goal of this tutorial
+
+Learning steps to benchmark a [Droonga][] cluster and compare it to a [Groonga][groonga] server.
+
+## Precondition
+
+* You must have basic knowledge and experience to set up and operate an [Ubuntu][] or [CentOS][] server.
+* You must have basic knowledge and experience to use [Groonga][groonga] via HTTP.
+* You must have basic knowledge to construct a [Droonga][] cluster.
+  Please complete the ["getting started" tutorial](../groonga/) before this.
+
+## Why benchmarking?
+
+Because Droonga is compatible with Groonga, you may plan to migrate your Groonga-based application to Droonga.
+Before that, you should benchmark Droonga and confirm that it is a better alternative for your application.
+
+Of course you may simply hope to know the difference in performance between Groonga and Droonga.
+Benchmarking will make it clear.
+
+
+### How to visualize the performance?
+
+There are two major indexes to indicate performance of a system.
+
+ * latency
+ * throughput
+
+Latency is the response time, the actual elapsed time between two moments: when the system receives a request, and when it returns a response.
+In other words, for clients, it is the time to wait for each request.
+For this index, smaller is better.
+In general, latency becomes small for lightweight queries, a small database, or fewer clients.
+
+Throughput means how many requests can be processed per unit of time.
+The performance index is described as "*queries per second* (*qps*)".
+For example, if a Groonga server processed 10 requests in one second, that is described as "10qps".
+Possibly there are 10 users (clients), or 2 users each opening 5 tabs in their web browsers.
+Anyway, "10qps" means that the Groonga server actually accepted and responded to 10 requests within one second.
+
+You can run benchmarks with the command `drnbench-request-response`, provided by the Gem package [drnbench]().
+It measures both latency and throughput of the target service.
+
+
+### How the benchmark tool measures the performance?
+
+`drnbench-request-response` benchmarks the target service in steps like the following:
+
+ 1. The master process generates one virtual client.
+    The client starts to send many requests to the target sequentially and frequently.
+ 2. After a while, the master process kills the client.
+    Then it calculates the minimum, maximum, and average elapsed time from the response data.
+    It also counts the number of requests actually processed by the target, and reports it as the "qps" of the single-client case.
+ 3. The master process generates two virtual clients.
+    They start to send requests.
+ 4. After a while, the master process kills all clients.
+    Then the minimum, maximum, and average elapsed time are calculated, and the total number of processed requests sent by all clients is reported as the "qps" of the two-client case.
+ 5. This is repeated with three clients, four clients, and so on, progressively.
+ 6. Finally, the master process reports minimum/maximum/average elapsed time, "qps", and other extra information for each case, as a CSV file like:
+    
+    ~~~
+    n_clients,total_n_requests,queries_per_second,min_elapsed_time,max_elapsed_time,average_elapsed_time,200
+    1,996,33.2,0.001773766,0.238031643,0.019765581680722916,100.0
+    2,1973,65.76666666666667,0.001558398,0.272225481,0.020047345673086702,100.0
+    4,3559,118.63333333333334,0.001531184,0.39942581,0.023357554419499882,100.0
+    6,4540,151.33333333333334,0.001540704,0.501663069,0.042344890696916264,100.0
+    8,4247,141.56666666666666,0.001483995,0.577100609,0.045836844514480835,100.0
+    10,4466,148.86666666666667,0.001987089,0.604507078,0.06949704923846833,100.0
+    12,4500,150.0,0.001782343,0.612596799,0.06902839555222215,100.0
+    14,4183,139.43333333333334,0.001980711,0.60754769,0.1033681068718623,100.0
+    16,4519,150.63333333333333,0.00284654,0.653204575,0.09473386513387955,100.0
+    18,4362,145.4,0.002330049,0.640683693,0.12581190483929405,100.0
+    20,4228,140.93333333333334,0.003710795,0.662666076,0.1301649290901133,100.0
+    ~~~
+    
+    You can analyze it, draw a graph from it, and so on.
+    
+    (Note: Performance results fluctuate from various factors.
+    This is just an example on a specific version, specific environment.)
+
+### How to read and analyze the result? {#how-to-analyze}
+
+Look at the result above.
+
+#### HTTP response statuses
+
+See the last column, named `200` in the example above.
+It shows the percentage of each HTTP response status.
+`200` is "OK", `0` is "timed out".
+If clients got `400`, `500` and other errors, they will also be reported.
+This information will help you detect unexpected slowdowns.
+
+#### Latency
+
+Latency is easily analyzed - smaller is better.
+The minimum and average elapsed time become small if any cache system is working correctly on the target.
+The maximum time is affected by slow queries, the system's page-in/page-out, unexpected errors, and so on.
+
+A graph of latency also reveals the maximum number of connections the system can effectively accept at the same time.
+
+![A graph of latency](/images/tutorial/benchmark/latency-groonga-1.0.8.png)
+
+This is a graph of `average_elapsed_time`.
+You'll see that the time increases for more than 4 clients.
+What does it mean?
+
+Groonga can process multiple requests completely in parallel, up to the number of available processors.
+When the computer has 4 processors, the system can process 4 or fewer requests at the same time without extra latency.
+If more requests are sent, the 5th and later requests will be processed only after a preceding request has been processed.
+The graph confirms this logical limitation.
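+
+For example, you can check the number of processors on the target computer like this (a trivial sketch; `nproc` is part of GNU coreutils):
+
+~~~
+% nproc
+4
+~~~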
+
+#### Throughput
+
+A graph helps you to analyze throughput performance.
+
+![A graph of throughput](/images/tutorial/benchmark/throughput-groonga-1.0.8.png)
+
+You'll see that the "qps" stagnates around 150 for 6 or more clients.
+This means that the target service can process at most 150 requests per second.
+
+In other words, 150qps is the maximum throughput performance of this system as a whole - the combined performance of the hardware, software, network, size of the database, queries, and so on.
+If the number of requests for your service is growing and is going to reach this limit, you have to do something about it - optimize queries, replace the computer with a more powerful one, and so on.
+
+#### Performance comparison
+
+By sending the same request patterns to Groonga and Droonga, you can compare the performance of each system.
+If Droonga has better performance, it will be a good reason to migrate your service from Groonga to Droonga.
+
+Moreover, by comparing multiple results from different numbers of Droonga nodes, you can analyze the cost-benefit of newly introduced nodes.
+
+
+## Prepare environments for benchmarking
+
+Assume that there are four [Ubuntu][] 14.04LTS servers for the new Droonga cluster and they can resolve each other's host names:
+
+ * `192.168.100.50`, the host name is `node0`
+ * `192.168.100.51`, the host name is `node1`
+ * `192.168.100.52`, the host name is `node2`
+ * `192.168.100.53`, the host name is `node3`
+
+One is the client; the others are Droonga nodes.
+
+### Ensure an existing reference database (and the data source)
+
+If you have an existing service based on Groonga, it becomes the reference.
+Then you just have to dump all data in your Groonga database and load them into a new Droonga cluster.
+
+Otherwise - if you have no existing service - prepare a new reference database with enough data for an effective benchmark.
+The repository [wikipedia-search][] includes some helper scripts to construct your Groonga server (and Droonga cluster) with [Japanese Wikipedia](http://ja.wikipedia.org/) pages.
+
+So let's prepare a new Groonga database including Wikipedia pages, on the `node0`.
+
+ 1. Determine the size of the database.
+    You have to use a database of a suitable size for benchmarking.
+    
+    * If it is too small, you'll see "too bad" benchmark results for Droonga, because the percentage of Droonga's overhead becomes relatively too large.
+    * If it is too large, you'll see "too unstable" results, because page-in and page-out of RAM will slow the performance down randomly.
+    * If the RAM sizes of the nodes are different, you should determine the size of the database based on the node with the smallest RAM.
+
+    For example, if there are three nodes `node0` (8GB RAM), `node1` (8GB RAM), and `node2` (6GB RAM), then the database should be smaller than 6GB.
+ 2. Set up the Groonga server, as instructed on [the installation guide](http://groonga.org/docs/install.html).
+    
+    ~~~
+    (on node0)
+    % sudo apt-get -y install software-properties-common
+    % sudo add-apt-repository -y universe
+    % sudo add-apt-repository -y ppa:groonga/ppa
+    % sudo apt-get update
+    % sudo apt-get -y install groonga
+    ~~~
+    
+    Then Groonga becomes available.
+ 3. Download the archive of Wikipedia pages and convert it to a dump file for Groonga, with the rake task `data:convert:groonga:ja`.
+    You can specify the number of records (pages) to be converted via the environment variable `MAX_N_RECORDS` (default=5000).
+    
+    ~~~
+    (on node0)
+    % cd ~/
+    % git clone https://github.com/droonga/wikipedia-search.git
+    % cd wikipedia-search
+    % bundle install --path vendor/
+    % time (MAX_N_RECORDS=1500000 bundle exec rake data:convert:groonga:ja \
+                                    data/groonga/ja-pages.grn)
+    ~~~
+    
+    Because the archive is very large, downloading and data conversion may take time.
+    
+    After that, a dump file `~/wikipedia-search/data/groonga/ja-pages.grn` will be there.
+    Create a new database and load the dump file into it.
+    This may also take time:
+    
+    ~~~
+    (on node0)
+    % mkdir -p $HOME/groonga/db/
+    % groonga -n $HOME/groonga/db/db quit
+    % time (cat ~/wikipedia-search/config/groonga/schema.grn | groonga $HOME/groonga/db/db)
+    % time (cat ~/wikipedia-search/config/groonga/indexes.grn | groonga $HOME/groonga/db/db)
+    % time (cat ~/wikipedia-search/data/groonga/ja-pages.grn | groonga $HOME/groonga/db/db)
+    ~~~
+    
+    Note: the number of records affects the database size.
+    Just for reference, my results are:
+    
+     * 1.1GB database was constructed from 300000 records.
+       Data conversion took 17 min, data loading took 6 min.
+     * 4.3GB database was constructed from 1500000 records.
+       Data conversion took 53 min, data loading took 64 min.
+    
+ 4. Start the Groonga as an HTTP server.
+    
+    ~~~
+    (on node0)
+    % groonga -p 10041 -d --protocol http $HOME/groonga/db/db
+    ~~~
+
+OK, now we can use this node as the reference for benchmarking.
+
+
+### Set up a Droonga cluster
+
+Install Droonga on all nodes.
+Because we are benchmarking it via HTTP, you have to install both services, `droonga-engine` and `droonga-http-server`, on each node.
+
+~~~
+(on node0)
+% host=node0
+% curl https://raw.githubusercontent.com/droonga/droonga-engine/master/install.sh | \
+    sudo HOST=$host bash
+% curl https://raw.githubusercontent.com/droonga/droonga-http-server/master/install.sh | \
+    sudo ENGINE_HOST=$host HOST=$host PORT=10042 bash
+% sudo droonga-engine-catalog-generate \
+    --hosts=node0,node1,node2
+% sudo service droonga-engine start
+% sudo service droonga-http-server start
+~~~
+
+~~~
+(on node1)
+% host=node1
+...
+~~~
+
+~~~
+(on node2)
+% host=node2
+...
+~~~
+
+Note: to start `droonga-http-server` with a port number different from Groonga's, we specify another port, `10042`, via the `PORT` environment variable, like above.
+
+Make sure that Droonga's HTTP server is actually listening on port `10042` and that it is working as a cluster with three nodes:
+
+~~~
+(on node0)
+% sudo apt-get install -y jq
+% curl "http://node0:10042/droonga/system/status" | jq .
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node1:10031/droonga": {
+      "live": true
+    },
+    "node2:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+
+### Synchronize data from Groonga to Droonga
+
+Next, prepare the Droonga database.
+
+You can generate messages for Droonga from Groonga's dump result with the `grn2drn` command.
+Install the `grn2drn` Gem package on the Groonga server computer to make the command available.
+
+~~~
+(on node0)
+% sudo gem install grn2drn
+~~~
+
+The `grndump` command, provided as a part of the `rroonga` Gem package, can flexibly extract all data from an existing Groonga database.
+If you are going to extract data from an existing Groonga server, you have to install `rroonga` beforehand.
+
+~~~
+(on Ubuntu server)
+% sudo apt-get -y install software-properties-common
+% sudo add-apt-repository -y universe
+% sudo add-apt-repository -y ppa:groonga/ppa
+% sudo apt-get update
+% sudo apt-get -y install libgroonga-dev
+% sudo gem install rroonga
+~~~
+
+~~~
+(on CentOS server)
+# rpm -ivh http://packages.groonga.org/centos/groonga-release-1.1.0-1.noarch.rpm
+# yum -y makecache
+# yum -y install ruby-devel groonga-devel
+# gem install rroonga
+~~~
+
+Then dump schemas and data separately and load them to the Droonga cluster.
+
+~~~
+(on node0)
+% time (grndump --no-dump-tables $HOME/groonga/db/db | \
+          grn2drn | \
+          droonga-send --server=node0 \
+                       --report-throughput)
+% time (grndump --no-dump-schema --no-dump-indexes $HOME/groonga/db/db | \
+          grn2drn | \
+          droonga-send --server=node0 \
+                       --server=node1 \
+                       --server=node2 \
+                       --messages-per-second=100 \
+                       --report-throughput)
+~~~
+
+Note that you must send the requests for the schema and indexes to just one endpoint.
+Sending schema definition requests to multiple endpoints in parallel will break the database, because Droonga cannot order schema changing commands sent to each node in parallel.
+
+To reduce traffic and system load, you should specify the maximum number of incoming messages per second with the `--messages-per-second` option.
+If too many messages rush into the Droonga cluster, they may overflow it - Droonga may eat up the RAM and slow the system down.
+
+This may take much time.
+For example, with the option `--messages-per-second=100`, 1500000 records will be synchronized in about 4 hours (we can estimate the required time like: `1500000 / 100 / 60 / 60`).
+
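+For example, the estimation can be done quickly on the shell (a trivial sketch using Bash arithmetic):
+
+~~~
+% echo "$((1500000 / 100)) seconds = about $((1500000 / 100 / 3600)) hours"
+15000 seconds = about 4 hours
+~~~
+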
+In the end, you have two HTTP servers: the Groonga HTTP server on port `10041`, and the Droonga HTTP servers on port `10042`.
+
+
+### Set up the client
+
+You must install the benchmark client on the client computer.
+
+Assume that you use a computer `node3` as the client:
+
+~~~
+(on node3)
+% sudo apt-get update
+% sudo apt-get -y upgrade
+% sudo apt-get install -y ruby curl jq
+% sudo gem install drnbench
+~~~
+
+
+## Prepare request patterns
+
+Let's prepare request pattern files for benchmarking.
+
+### Determine the expected cache hit rate
+
+First, you have to determine the cache hit rate.
+
+If you have an existing service based on Groonga, you can get the actual cache hit rate of the Groonga database via the `status` command, like:
+
+~~~
+% curl "http://node0:10041/d/status" | jq .
+[
+  [
+    0,
+    1412326645.19701,
+    3.76701354980469e-05
+  ],
+  {
+    "max_command_version": 2,
+    "alloc_count": 158,
+    "starttime": 1412326485,
+    "uptime": 160,
+    "version": "4.0.6",
+    "n_queries": 1000,
+    "cache_hit_rate": 0.5,
+    "command_version": 1,
+    "default_command_version": 1
+  }
+]
+~~~
+
+The cache hit rate appears as `"cache_hit_rate"`.
+`0.5` means 50%, i.e. half of the responses are returned from cached results.
+
+If you have no existing service, you should assume that the cache hit rate will be 50%.
+
+To measure and compare the performance of Groonga and Droonga properly, you should prepare request patterns for benchmarking which bring the cache hit rate near the actual rate.
+So, how can you do that?
+
+You can control the cache hit rate by the number of unique request patterns, calculated with the expression
+`N = 100 / (cache hit rate)`, because Groonga and Droonga (`droonga-http-server`) cache up to 100 results by default.
+When the expected cache hit rate is 50%, the number of unique requests is calculated as: `N = 100 / 0.5 = 200`.
+
+Note: if the actual rate is near zero, the number of unique requests becomes huge!
+In such a case you should round the rate up to 0.01 (1%) or so.
+
+
+### Format of request patterns file
+
+The request patterns list for `drnbench-request-response` is a plain text file: a list of request paths for the host.
+Here is a short example of requests for Groonga's `select` command:
+
+~~~
+/d/select?command_version=2&table=Pages&limit=10&match_columns=title&output_columns=title&query=AAA
+/d/select?command_version=2&table=Pages&limit=10&match_columns=title&output_columns=title&query=BBB
+...
+~~~
+
+If you have an existing service based on Groonga, the list should be generated from the actual access log, query log, and so on.
+Patterns similar to actual requests will measure the performance of your system more effectively.
+To generate 200 unique request patterns, you just have to collect 200 unique paths from your log, for example as shown below.
+
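+For example, if your HTTP server writes an access log in the common "combined" format, a sketch like the following could collect 200 unique `select` request paths (the log path `/var/log/groonga/access.log` and the log format are assumptions - adjust them for your environment):
+
+~~~
+% grep -o '"GET [^ ]*' /var/log/groonga/access.log | \
+    sed -e 's/"GET //' | \
+    grep '^/d/select' | \
+    sort | uniq | head -n 200 \
+    > ./patterns.txt
+~~~
+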
+Otherwise, you'll have to generate a list of request paths from something else.
+See the next section.
+
+### Prepare list of search terms
+
+To generate 200 unique request patterns, you have to prepare 200 terms.
+Moreover, all of the terms must be effective search terms for the Groonga database.
+If you use randomly generated terms (like `P2qyNJ9L`, `Hy4pLKc5`, `D5eftuTp`, ...), you won't get an effective benchmark result, because "not found" results will be returned for most requests.
+
+For this purpose there is a utility command, `drnbench-extract-searchterms`.
+It generates a list of terms from a Groonga select result, like:
+
+~~~
+% curl "http://node0:10041/d/select?command_version=2&table=Pages&limit=10&output_columns=title" | \
+    drnbench-extract-searchterms
+title1
+title2
+title3
+...
+title10
+~~~
+
+`drnbench-extract-searchterms` extracts terms from the first column of the records.
+To collect 200 effective search terms, you just have to give it a select result with the option `limit=200`.
+
+
+### Generate request pattern file from given terms
+
+OK, let's generate request patterns with `drnbench-extract-searchterms` from a select result.
+
+~~~
+% n_unique_requests=200
+% curl "http://node0:10041/d/select?command_version=2&table=Pages&limit=$n_unique_requests&output_columns=title" | \
+    drnbench-extract-searchterms --escape | \
+    sed -r -e "s;^;/d/select?command_version=2\&table=Pages\&limit=10\&match_columns=title,text\&output_columns=snippet_html(title),snippet_html(text),categories,_key\&query_flags=NONE\&sortby=title\&drilldown=categories\&drilldown_limit=10\&drilldown_output_columns=_id,_key,_nsubrecs\&drilldown_sortby=_nsubrecs\&query=;" \
+    > ./patterns.txt
+~~~
+
+Note:
+
+ * You must escape `&` in the sed script with a backslash, like `\&`.
+ * You should specify the `--escape` option for `drnbench-extract-searchterms`.
+   It escapes characters unsafe for URI strings.
+ * You should specify `query_flags=NONE` as a part of the parameters, if you use search terms in the `query` parameter.
+   It forces Groonga to ignore special characters in the `query` parameter.
+   Otherwise you may see some errors from invalid queries.
+
+The generated file `patterns.txt` looks like the following:
+
+~~~
+/d/select?command_version=2&table=Pages&limit=10&match_columns=title,text&output_columns=snippet_html(title),snippet_html(text),categories,_key&query_flags=NONE&sortby=title&drilldown=categories&drilldown_limit=10&drilldown_output_columns=_id,_key,_nsubrecs&drilldown_sortby=_nsubrecs&query=AAA
+/d/select?command_version=2&table=Pages&limit=10&match_columns=title,text&output_columns=snippet_html(title),snippet_html(text),categories,_key&query_flags=NONE&sortby=title&drilldown=categories&drilldown_limit=10&drilldown_output_columns=_id,_key,_nsubrecs&drilldown_sortby=_nsubrecs&query=BBB
+...
+~~~
+
+
+## Run the benchmark
+
+OK, now everything is ready to run.
+Let's benchmark Groonga and Droonga.
+
+### Benchmark Groonga
+
+First, run the benchmark for Groonga as the reference.
+If you configured a node as a reference Groonga server and the daemon is stopped, start Groonga's HTTP server before running:
+
+~~~
+(on node0)
+% groonga -p 10041 -d --protocol http $HOME/groonga/db/db
+~~~
+
+You can run the benchmark with the command `drnbench-request-response`, like:
+
+~~~
+(on node3)
+% drnbench-request-response \
+    --step=2 \
+    --start-n-clients=0 \
+    --end-n-clients=20 \
+    --duration=30 \
+    --interval=10 \
+    --request-patterns-file=$PWD/patterns.txt \
+    --default-hosts=node0 \
+    --default-port=10041 \
+    --output-path=$PWD/groonga-result.csv
+~~~
+
+Important parameters are:
+
+ * `--step` is the number of virtual clients increased on each progress.
+ * `--start-n-clients` is the initial number of virtual clients.
+   Even if you specify `0`, initially one client is always generated.
+ * `--end-n-clients` is the maximum number of virtual clients.
+   The benchmark is performed progressively until the number of clients reaches this limit.
+ * `--duration` is the duration of each benchmark.
+   This should be long enough to average out the result.
+   `30` (seconds) seems good for my case.
+ * `--interval` is the interval between each benchmark.
+   This should be long enough to let the previous benchmark finish.
+   `10` (seconds) seems good for my case.
+ * `--request-patterns-file` is the path to the pattern file.
+ * `--default-hosts` is the list of host names of target endpoints.
+   By specifying multiple hosts as a comma-separated list, you can simulate load balancing.
+ * `--default-port` is the port number of the target endpoint.
+ * `--output-path` is the path to the result file.
+   Statistics of all benchmarks are saved to a file at this location.
+
+While running, you should monitor the system status of `node0`, with `top` or something similar.
+If the benchmark elicits Groonga's performance correctly, Groonga's process uses the CPU fully (for example, `400%` on a computer with 4 processors).
+Otherwise something is wrong - for example, the network is too slow, or the client machine's performance is too low.
+
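+For example, one way to watch the CPU usage of the Groonga process is (just a sketch; `top` and `pgrep` are assumed to be available):
+
+~~~
+(on node0)
+% top -d 1 -p $(pgrep -d, groonga)
+~~~
+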
+Then you'll get the reference result of Groonga.
+
+To confirm the result is valid, check the response of the `status` command:
+
+~~~
+% curl "http://node0:10041/d/status" | jq .
+[
+  [
+    0,
+    1412326645.19701,
+    3.76701354980469e-05
+  ],
+  {
+    "max_command_version": 2,
+    "alloc_count": 158,
+    "starttime": 1412326485,
+    "uptime": 160,
+    "version": "4.0.6",
+    "n_queries": 1000,
+    "cache_hit_rate": 0.49,
+    "command_version": 1,
+    "default_command_version": 1
+  }
+]
+~~~
+
+Look at the value of `"cache_hit_rate"`.
+If it is far from the expected cache hit rate (ex. `0.5`), something is wrong - for example, there are too few request patterns.
+A too-high cache hit rate produces unexpectedly high throughput.
+
+After that you should stop Groonga to release CPU and RAM resources, if it is running on a Droonga node.
+
+~~~
+(on node0)
+% pkill groonga
+~~~
+
+### Benchmark Droonga
+
+#### Benchmark Droonga with single node
+
+Before benchmarking, shrink your cluster to only one node.
+
+~~~
+(on node1, node2)
+% sudo service droonga-engine stop
+% sudo service droonga-http-server stop
+~~~
+
+~~~
+(on node0)
+% sudo droonga-engine-catalog-generate \
+    --hosts=node0
+% sudo service droonga-engine restart
+% sudo service droonga-http-server restart
+~~~
+
+To clear the effects of the previous benchmark, you should restart the services before each test.
+
+
+After that the endpoint `node0` works as a Droonga cluster with a single node.
+Make sure that only one node is actually detected:
+
+~~~
+(on node3)
+% curl "http://node0:10042/droonga/system/status" | jq .
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+Run the benchmark.
+
+~~~
+(on node3)
+% drnbench-request-response \
+    --step=2 \
+    --start-n-clients=0 \
+    --end-n-clients=20 \
+    --duration=30 \
+    --interval=10 \
+    --request-patterns-file=$PWD/patterns.txt \
+    --default-hosts=node0 \
+    --default-port=10042 \
+    --output-path=$PWD/droonga-result-1node.csv
+~~~
+
+Note that the default port is changed from `10041` (Groonga's HTTP server) to `10042` (Droonga).
+The path to the result file is also changed.
+
+While running, you should monitor the system status of `node0`, with `top` or something similar.
+It may help you to figure out what the bottleneck is.
+
+And, to confirm the result is valid, you should check the actual cache hit rate:
+
+~~~
+% curl "http://node0:10042/statistics/cache" | jq .
+{
+  "hitRatio": 49.830717830807124,
+  "nHits": 66968,
+  "nGets": 134391
+}
+~~~
+
+Look at the value of `"hitRatio"`.
+The actual cache hit rate of the HTTP server is reported as a percentage, like above (the value `49.830717830807124` means `49.830717830807124%`).
+If it is far from the expected cache hit rate, something is wrong.
+
+#### Benchmark Droonga with two nodes
+
+Before benchmarking, join the second node to the cluster.
+
+~~~
+(on node0, node1)
+% sudo droonga-engine-catalog-generate \
+    --hosts=node0,node1
+% sudo service droonga-engine restart
+% sudo service droonga-http-server restart
+~~~
+
+After that both endpoints `node0` and `node1` work as a Droonga cluster with two nodes.
+Make sure that two nodes are actually detected:
+
+~~~
+(on node3)
+% curl "http://node0:10042/droonga/system/status" | jq .
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node1:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+Run the benchmark.
+
+~~~
+(on node3)
+% drnbench-request-response \
+    --step=2 \
+    --start-n-clients=0 \
+    --end-n-clients=20 \
+    --duration=30 \
+    --interval=10 \
+    --request-patterns-file=$PWD/patterns.txt \
+    --default-hosts=node0,node1 \
+    --default-port=10042 \
+    --output-path=$PWD/droonga-result-2nodes.csv
+~~~
+
+Note that two hosts are specified via the `--default-hosts` option.
+
+If you send all requests to a single endpoint, `droonga-http-server` will become a bottleneck, because it works as a single process for now.
+Moreover, `droonga-http-server` and `droonga-engine` will scramble for CPU resources.
+To measure the performance of your Droonga cluster effectively, you should spread the CPU load evenly across the nodes.
+
+Of course, in a production environment this should be done by a load balancer, but it's a hassle to set up a load balancer just for benchmarking.
+Instead, you can specify multiple endpoint host names as a comma-separated list for the `--default-hosts` option.
+
+The path to the result file is also changed.
+
+Don't forget to monitor the system status of both nodes while benchmarking.
+If only one node is busy and the other is idle, something is wrong - for example, they are not working as a cluster.
+You also must check the actual cache hit rate of all nodes, for example as shown below.
+
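+For example, you can check the cache hit rate of every node at once with a small loop over the `/statistics/cache` endpoint shown above:
+
+~~~
+(on node3)
+% for node in node0 node1; do
+    echo "$node:"
+    curl --silent "http://$node:10042/statistics/cache" | jq .hitRatio
+  done
+~~~
+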
+#### Benchmark Droonga with three nodes
+
+Before benchmarking, join the last node to the cluster.
+
+~~~
+(on node0, node1, node2)
+% sudo droonga-engine-catalog-generate \
+    --hosts=node0,node1,node2
+% sudo service droonga-engine restart
+% sudo service droonga-http-server restart
+~~~
+
+After that all endpoints `node0`, `node1`, and `node2` work as a Droonga cluster with three nodes.
+Make sure that three nodes are actually detected:
+
+~~~
+(on node3)
+% curl "http://node0:10042/droonga/system/status" | jq .
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node1:10031/droonga": {
+      "live": true
+    },
+    "node2:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+Run the benchmark.
+
+~~~
+(on node3)
+% drnbench-request-response \
+    --step=2 \
+    --start-n-clients=0 \
+    --end-n-clients=20 \
+    --duration=30 \
+    --interval=10 \
+    --request-patterns-file=$PWD/patterns.txt \
+    --default-hosts=node0,node1,node2 \
+    --default-port=10042 \
+    --output-path=$PWD/droonga-result-3nodes.csv
+~~~
+
+Note that both `--default-hosts` and `--output-path` are changed again.
+Monitoring of system status and checking cache hit rate of all nodes are also important.
+
+## Analyze the result
+
+OK, now you have four results:
+
+ * `groonga-result.csv`
+ * `droonga-result-1node.csv`
+ * `droonga-result-2nodes.csv`
+ * `droonga-result-3nodes.csv`
+
+[As described](#how-to-analyze), you can analyze them.
+
+For example, you can plot a graph from these results like:
+
+![A layered graph of latency](/images/tutorial/benchmark/latency-mixed-1.0.8.png)
+
+You can explain this graph of latency as:
+
+ * The minimum latency of Droonga is larger than Groonga's.
+   There is some overhead in Droonga.
+ * The latency of a multi-node Droonga cluster increases more slowly than Groonga's.
+   Droonga can process more requests at the same time without extra waiting time.
+
+![A layered graph of throughput](/images/tutorial/benchmark/throughput-mixed-1.0.8.png)
+
+You can explain this graph of throughput as:
+
+ * The graphs of Groonga and single-node Droonga are alike.
+   There is little performance loss between Groonga and Droonga.
+ * The maximum throughput of Droonga increases with the number of nodes.
+
+(Note: Performance results fluctuate from various factors.
+This graph is just an example on a specific version, specific environment.)
+
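+If you want to reproduce such layered graphs yourself, here is a minimal sketch with gnuplot (again an assumption: gnuplot must be installed separately; column 6 of each CSV is `average_elapsed_time`):
+
+~~~
+(on node3)
+% gnuplot <<'EOF'
+set terminal png
+set output "latency-comparison.png"
+set datafile separator ","
+set xlabel "number of clients"
+set ylabel "average elapsed time (sec)"
+# "every ::1" skips the CSV header line of each file
+plot "groonga-result.csv"        every ::1 using 1:6 with linespoints title "Groonga", \
+     "droonga-result-1node.csv"  every ::1 using 1:6 with linespoints title "Droonga (1 node)", \
+     "droonga-result-2nodes.csv" every ::1 using 1:6 with linespoints title "Droonga (2 nodes)", \
+     "droonga-result-3nodes.csv" every ::1 using 1:6 with linespoints title "Droonga (3 nodes)"
+EOF
+~~~
+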
+## Conclusion
+
+In this tutorial, you prepared a reference [Groonga][] server and a [Droonga][] cluster.
+You also learned how to prepare request patterns, how to measure your systems, and how to analyze the results.
+
+  [Ubuntu]: http://www.ubuntu.com/
+  [CentOS]: https://www.centos.org/
+  [Droonga]: https://droonga.org/
+  [Groonga]: http://groonga.org/
+  [drnbench]: https://github.com/droonga/drnbench/
+  [wikipedia-search]: https://github.com/droonga/wikipedia-search/
+  [command reference]: ../../reference/commands/

  Added: tutorial/1.1.0/dump-restore/index.md (+577 -0) 100755
===================================================================
--- /dev/null
+++ tutorial/1.1.0/dump-restore/index.md    2014-11-30 23:20:40 +0900 (975e531)
@@ -0,0 +1,577 @@
+---
+title: "Droonga tutorial: How to backup and restore the database?"
+layout: en
+---
+
+* TOC
+{:toc}
+
+## The goal of this tutorial
+
+Learning steps to back up and restore data by hand.
+
+## Precondition
+
+* You must have an existing [Droonga][] cluster with some data.
+  Please complete the ["getting started" tutorial](../groonga/) before this.
+
+This tutorial assumes that there are two existing Droonga nodes prepared by the [previous tutorial](../groonga/): `node0` (`192.168.100.50`) and `node1` (`192.168.100.51`), and there is another computer `node2` (`192.168.100.52`) as a working environment.
+If you have Droonga nodes with other names, read `node0`, `node1` and `node2` in following descriptions as yours.
+
+## Backup data in a Droonga cluster
+
+### Install `drndump`
+
+First, install a command line tool named `drndump` via rubygems, to the working machine `node2`:
+
+~~~
+# gem install drndump
+~~~
+
+After that, confirm that the `drndump` command has been installed successfully:
+
+~~~
+$ drndump --version
+drndump 1.0.0
+~~~
+
+### Dump all data in a Droonga cluster
+
+The `drndump` command extracts all schema and data as JSONs.
+Let's dump the contents of your existing Droonga cluster.
+
+For example, if your cluster is constructed from two nodes `node0` (`192.168.100.50`) and `node1` (`192.168.100.51`), and you are now logged in to another new computer `node2` (`192.168.100.52`), then the command line is:
+
+~~~
+# drndump --host=node0 \
+           --receiver-host=node2
+{
+  "type": "table_create",
+  "dataset": "Default",
+  "body": {
+    "name": "Location",
+    "flags": "TABLE_PAT_KEY",
+    "key_type": "WGS84GeoPoint"
+  }
+}
+...
+{
+  "dataset": "Default",
+  "body": {
+    "table": "Store",
+    "key": "store9",
+    "values": {
+      "location": "146702531x-266363233",
+      "name": "Macy's 6th Floor - Herald Square - New York NY  (W)"
+    }
+  },
+  "type": "add"
+}
+{
+  "type": "column_create",
+  "dataset": "Default",
+  "body": {
+    "table": "Location",
+    "name": "store",
+    "type": "Store",
+    "flags": "COLUMN_INDEX",
+    "source": "location"
+  }
+}
+{
+  "type": "column_create",
+  "dataset": "Default",
+  "body": {
+    "table": "Term",
+    "name": "store_name",
+    "type": "Store",
+    "flags": "COLUMN_INDEX|WITH_POSITION",
+    "source": "name"
+  }
+}
+~~~
+
+Note the following:
+
+ * You must specify a valid host name of one of the nodes in the cluster, via the option `--host`.
+ * You must specify a valid host name or IP address of the computer you are logged in to, via the option `--receiver-host`.
+   It is used by the Droonga cluster to send response messages.
+ * The result includes the complete commands to construct a dataset identical to the source.
+
+The result is printed to the standard output.
+To save it as a JSONs file, use a redirection like:
+
+~~~
+$ drndump --host=node0 \
+          --receiver-host=node2 \
+    > dump.jsons
+~~~
+
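+Before restoring, you can do a quick sanity check on the dump file.
+For example, assuming `jq` is installed, you can count how many `add` messages (i.e. loaded records) the dump contains:
+
+~~~
+$ jq -c 'select(.type == "add")' dump.jsons | wc -l
+~~~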
+
+## Restore data to a Droonga cluster
+
+### Install `droonga-client`
+
+The result of the `drndump` command is a list of Droonga messages.
+
+You need to use the `droonga-send` command to send it to your Droonga cluster.
+Install the command, included in the package `droonga-client`, via rubygems, on the working machine `node2`:
+
+~~~
+# gem install droonga-client
+~~~
+
+After that, confirm that the `droonga-send` command has been installed successfully:
+
+~~~
+$ droonga-send --version
+droonga-send 0.2.0
+~~~
+
+### Prepare an empty Droonga cluster
+
+Assume that there is an empty Droonga cluster constructed from two nodes `node0` (`192.168.100.50`) and `node1` (`192.168.100.51`), now you are logged in to the host `node2` (`192.168.100.52`), and there is a dump file `dump.jsons`.
+
+If you are reading this tutorial sequentially, you'll have an existing cluster and the dump file.
+Make it empty with these commands:
+
+~~~
+$ endpoint="http://node0:10041"
+$ curl "$endpoint/d/table_remove?name=Location" | jq "."
+[
+  [
+    0,
+    1406610703.2229023,
+    0.0010793209075927734
+  ],
+  true
+]
+$ curl "$endpoint/d/table_remove?name=Store" | jq "."
+[
+  [
+    0,
+    1406610708.2757723,
+    0.006396293640136719
+  ],
+  true
+]
+$ curl "$endpoint/d/table_remove?name=Term" | jq "."
+[
+  [
+    0,
+    1406610712.379644,
+    6.723403930664062e-05
+  ],
+  true
+]
+~~~
+
+After that the cluster becomes empty.
+Let's confirm it.
+You'll see empty results from the `select` and `table_list` commands, like:
+
+~~~
+$ curl "$endpoint/d/table_list" | jq "."
+[
+  [
+    0,
+    1406610804.1535122,
+    0.0002875328063964844
+  ],
+  [
+    [
+      [
+        "id",
+        "UInt32"
+      ],
+      [
+        "name",
+        "ShortText"
+      ],
+      [
+        "path",
+        "ShortText"
+      ],
+      [
+        "flags",
+        "ShortText"
+      ],
+      [
+        "domain",
+        "ShortText"
+      ],
+      [
+        "range",
+        "ShortText"
+      ],
+      [
+        "default_tokenizer",
+        "ShortText"
+      ],
+      [
+        "normalizer",
+        "ShortText"
+      ]
+    ]
+  ]
+]
+$ curl -X DELETE "$endpoint/cache" | jq "."
+true
+$ curl "$endpoint/d/select?table=Store&output_columns=name&limit=10" | jq "."
+[
+  [
+    0,
+    1401363465.610241,
+    0
+  ],
+  [
+    [
+      [
+        null
+      ],
+      []
+    ]
+  ]
+]
+~~~
+
+Note: clear the response cache before sending a request for the `select` command.
+Otherwise you'll see an unexpected cached result based on the old data.
+
+Response caches are stored for the most recent 100 requests, and their lifetime is 1 minute, by default.
+You can clear all response caches manually by sending an HTTP `DELETE` request to the path `/cache`, like above.
+
+### Restore data from a dump result, to an empty Droonga cluster
+
+Because the result of the `drndump` command includes complete information to construct a dataset identical to the source, you can re-construct your cluster from a dump file, even if the cluster is broken.
+You just have to pour the contents of the dump file into an empty cluster, with the `droonga-send` command.
+
+To restore the cluster from the dump file, run a command line like:
+
+~~~
+$ droonga-send --server=node0  \
+                    dump.jsons
+~~~
+
+Note:
+
+ * You must specify a valid host name or IP address of one of the nodes in the cluster, via the option `--server`.
+
+Then the data is completely restored. Confirm it:
+
+~~~
+$ curl -X DELETE "$endpoint/cache" | jq "."
+true
+$ curl "$endpoint/d/select?table=Store&output_columns=name&limit=10" | jq "."
+[
+  [
+    0,
+    1401363556.0294158,
+    7.62939453125e-05
+  ],
+  [
+    [
+      [
+        40
+      ],
+      [
+        [
+          "name",
+          "ShortText"
+        ]
+      ],
+      [
+        "1st Avenue & 75th St. - New York NY  (W)"
+      ],
+      [
+        "76th & Second - New York NY  (W)"
+      ],
+      [
+        "Herald Square- Macy's - New York NY"
+      ],
+      [
+        "Macy's 5th Floor - Herald Square - New York NY  (W)"
+      ],
+      [
+        "80th & York - New York NY  (W)"
+      ],
+      [
+        "Columbus @ 67th - New York NY  (W)"
+      ],
+      [
+        "45th & Broadway - New York NY  (W)"
+      ],
+      [
+        "Marriott Marquis - Lobby - New York NY"
+      ],
+      [
+        "Second @ 81st - New York NY  (W)"
+      ],
+      [
+        "52nd & Seventh - New York NY  (W)"
+      ]
+    ]
+  ]
+]
+~~~
+
+## Duplicate an existing Droonga cluster to another empty cluster directly
+
+If you have multiple Droonga clusters, you can duplicate one to another.
+For this purpose, the package `droonga-engine` includes a utility command `droonga-engine-absorb-data`.
+It copies all data from an existing cluster to another one directly, so it is recommended if you don't need to save a dump file locally.
+
+### Prepare multiple Droonga clusters
+
+Assume that there are two clusters: the source has a node `node0` (`192.168.100.50`), and the destination has a node `node1` (`192.168.100.51`).
+
+If you are reading this tutorial sequentially, you'll have an existing cluster with two nodes.
+Split it into two separate clusters with `droonga-engine-catalog-modify` and make one of them empty, with these commands:
+
+~~~
+(on node0)
+# droonga-engine-catalog-modify --replica-hosts=node0
+~~~
+
+~~~
+(on node1)
+# droonga-engine-catalog-modify --replica-hosts=node1
+$ endpoint="http://node1:10041"
+$ curl "$endpoint/d/table_remove?name=Location"
+$ curl "$endpoint/d/table_remove?name=Store"
+$ curl "$endpoint/d/table_remove?name=Term"
+~~~
+
+After that there are two clusters: one contains `node0` with data, the other contains `node1` with no data. Confirm it:
+
+
+~~~
+$ curl "http://node0:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    }
+  }
+}
+$ curl -X DELETE "http://node0:10041/cache" | jq "."
+true
+$ curl "http://node0:10041/d/select?table=Store&output_columns=name&limit=10" | jq "."
+[
+  [
+    0,
+    1401363556.0294158,
+    7.62939453125e-05
+  ],
+  [
+    [
+      [
+        40
+      ],
+      [
+        [
+          "name",
+          "ShortText"
+        ]
+      ],
+      [
+        "1st Avenue & 75th St. - New York NY  (W)"
+      ],
+      [
+        "76th & Second - New York NY  (W)"
+      ],
+      [
+        "Herald Square- Macy's - New York NY"
+      ],
+      [
+        "Macy's 5th Floor - Herald Square - New York NY  (W)"
+      ],
+      [
+        "80th & York - New York NY  (W)"
+      ],
+      [
+        "Columbus @ 67th - New York NY  (W)"
+      ],
+      [
+        "45th & Broadway - New York NY  (W)"
+      ],
+      [
+        "Marriott Marquis - Lobby - New York NY"
+      ],
+      [
+        "Second @ 81st - New York NY  (W)"
+      ],
+      [
+        "52nd & Seventh - New York NY  (W)"
+      ]
+    ]
+  ]
+]
+$ curl "http://node1:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node1:10031/droonga": {
+      "live": true
+    }
+  }
+}
+$ curl -X DELETE "http://node1:10041/cache" | jq "."
+true
+$ curl "http://node1:10041/d/select?table=Store&output_columns=name&limit=10" | jq "."
+[
+  [
+    0,
+    1401363465.610241,
+    0
+  ],
+  [
+    [
+      [
+        null
+      ],
+      []
+    ]
+  ]
+]
+~~~
+
+Note: `droonga-http-server` is associated with the `droonga-engine` working on the same computer.
+After you split the cluster like above, `droonga-http-server` on `node0` communicates only with `droonga-engine` on `node0`, and `droonga-http-server` on `node1` communicates only with `droonga-engine` on `node1`.
+See also the next tutorial for more details.
+
+
+### Duplicate data between two Droonga clusters
+
+To copy data between two clusters, run the `droonga-engine-absorb-data` command on a node, like:
+
+~~~
+(on node1)
+$ droonga-engine-absorb-data --source-host=node0 \
+                             --destination-host=node1 \
+                             --receiver-host=node1
+Start to absorb data from node0
+                       to node1
+                      via node1 (this host)
+  dataset = Default
+  port    = 10031
+  tag     = droonga
+
+Absorbing...
+...
+Done.
+~~~
+
+You can run the command on a different node, like:
+
+~~~
+(on node2)
+$ droonga-engine-absorb-data --source-host=node0 \
+                             --destination-host=node1 \
+                             --receiver-host=node2
+Start to absorb data from node0
+                       to node1
+                      via node2 (this host)
+...
+~~~
+
+Note that you must specify the host name (or the IP address) of the working machine via the `--receiver-host` option.
+
+After that, the contents of these two clusters are completely synchronized. Confirm it:
+
+~~~
+$ curl -X DELETE "http://node1:10041/cache" | jq "."
+true
+$ curl "http://node1:10041/d/select?table=Store&output_columns=name&limit=10" | jq "."
+[
+  [
+    0,
+    1401363556.0294158,
+    7.62939453125e-05
+  ],
+  [
+    [
+      [
+        40
+      ],
+      [
+        [
+          "name",
+          "ShortText"
+        ]
+      ],
+      [
+        "1st Avenue & 75th St. - New York NY  (W)"
+      ],
+      [
+        "76th & Second - New York NY  (W)"
+      ],
+      [
+        "Herald Square- Macy's - New York NY"
+      ],
+      [
+        "Macy's 5th Floor - Herald Square - New York NY  (W)"
+      ],
+      [
+        "80th & York - New York NY  (W)"
+      ],
+      [
+        "Columbus @ 67th - New York NY  (W)"
+      ],
+      [
+        "45th & Broadway - New York NY  (W)"
+      ],
+      [
+        "Marriott Marquis - Lobby - New York NY"
+      ],
+      [
+        "Second @ 81st - New York NY  (W)"
+      ],
+      [
+        "52nd & Seventh - New York NY  (W)"
+      ]
+    ]
+  ]
+]
+~~~
+
+### Unite two Droonga clusters
+
+Run the following command lines to unite these two clusters:
+
+~~~
+(on node0)
+# droonga-engine-catalog-modify --add-replica-hosts=node1
+~~~
+
+~~~
+(on node1)
+# droonga-engine-catalog-modify --add-replica-hosts=node0
+~~~
+
+After that there is just one cluster - yes, it's the initial state.
+
+~~~
+$ curl "http://node0:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node1:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+## Conclusion
+
+In this tutorial, you backed up a [Droonga][] cluster and restored the data.
+Moreover, you duplicated the contents of an existing Droonga cluster to another empty cluster.
+
+Next, let's learn [how to add a new replica to an existing Droonga cluster](../add-replica/).
+
+  [Ubuntu]: http://www.ubuntu.com/
+  [Droonga]: https://droonga.org/
+  [Groonga]: http://groonga.org/
+  [command reference]: ../../reference/commands/

  Added: tutorial/1.1.0/groonga/index.md (+971 -0) 100755
===================================================================
--- /dev/null
+++ tutorial/1.1.0/groonga/index.md    2014-11-30 23:20:40 +0900 (c2596b8)
@@ -0,0 +1,971 @@
+---
+title: "Droonga tutorial: Getting started/How to migrate from Groonga?"
+layout: en
+---
+
+* TOC
+{:toc}
+
+## The goal of this tutorial
+
+Learning steps to run a Droonga cluster by hand, and to use it as a [Groonga][groonga] compatible server.
+
+## Precondition
+
+* You must have basic knowledge and experiences to set up and operate an [Ubuntu][] or [CentOS][] Server.
+* You must have basic knowledge and experiences to use the [Groonga][groonga] via HTTP.
+
+## What's Droonga?
+
+It is a data processing engine based on a distributed architecture, named after the term "distributed Groonga".
+As its name suggests, it can work as a Groonga compatible server with some improvements - replication and sharding.
+
+In a certain sense, Droonga is quite different from Groonga in its architecture, design, API, etc.
+However, you don't have to understand the whole architecture of Droonga if you simply use it as a Groonga compatible server.
+
+For example, let's try to build a database system to find [Starbucks stores in New York](http://geocommons.com/overlays/430038).
+
+## Set up a Droonga cluster
+
+A database system based on the Droonga is called *Droonga cluster*.
+This section describes how to set up a Droonga cluster from scratch.
+
+### Prepare computers for Droonga nodes
+
+A Droonga cluster is constructed from one or more computers, called *Droonga node*(s).
+Prepare computers for Droonga nodes at first.
+
+This tutorial describes steps to set up a Droonga cluster based on existing computers.
+The following instructions are basically written for a freshly prepared virtual machine of `Ubuntu 14.04 x64` or `CentOS 7 x64` on the service [DigitalOcean](https://www.digitalocean.com/), with an available console.
+
+If you just want to try Droonga casually, see another tutorial: [how to prepare multiple virtual machines on your own computer](../virtual-machines-for-experiments/).
+
+NOTE:
+
+ * Make sure to use instances with >= 2GB of memory, at least during the installation of the required packages for Droonga.
+   Otherwise, you may experience strange build errors.
+ * Make sure the hostname reported by `hostname -f` or the IP address reported by `hostname -i` is accessible from each other computer in your cluster.
+ * Make sure that commands `curl` and `jq` are installed in your computers.
+   `curl` is required to download installation scripts.
+   `jq` is not required for installation, but it will help you to read response JSONs returned from Droonga.
+
+You need to prepare two or more nodes for effective replication.
+So this tutorial assumes that you have two computers:
+
+ * has an IP address `192.168.100.50`, with a host name `node0`.
+ * has an IP address `192.168.100.51`, with a host name `node1`.
+
+### Set up computers as Droonga nodes
+
+Groonga provides binary packages and you can install Groonga easily, for some environments.
+(See: [how to install Groonga](http://groonga.org/docs/install.html))
+
+On the other hand, steps to set up a computer as a Droonga node are:
+
+ 1. Install the `droonga-engine`.
+ 2. Install the `droonga-http-server`.
+ 3. Configure the node to work together with other nodes.
+
+Note that you must do all steps on each computer.
+However, they're very simple.
+
+Let's log in to the computer `node0` (`192.168.100.50`), and install Droonga components.
+
+First, install the `droonga-engine`.
+It is the core component that provides most features of the Droonga system.
+Download the installation script and run it with `bash` as the root user:
+
+~~~
+# curl https://raw.githubusercontent.com/droonga/droonga-engine/master/install.sh | \
+    bash
+...
+Installing droonga-engine from RubyGems...
+...
+Preparing the user...
+...
+Setting up the configuration directory...
+This node is configured with a hostname XXXXXXXX.
+
+Registering droonga-engine as a service...
+...
+Successfully installed droonga-engine.
+~~~
+
+Note: the name of the node itself (guessed from the host name of the computer) appears in the message.
+*It is used in various situations*, so *don't forget the name of each node*.
+
+Second, install the `droonga-http-server`.
+It is the frontend component required to translate HTTP requests into Droonga's native messages.
+Download the installation script and run it with `bash` as the root user:
+
+~~~
+# curl https://raw.githubusercontent.com/droonga/droonga-http-server/master/install.sh | \
+    bash
+...
+Installing droonga-http-server from npmjs.org...
+...
+Preparing the user...
+...
+Setting up the configuration directory...
+The droonga-engine service is detected on this node.
+The droonga-http-server is configured to be connected
+to this node (XXXXXXXX).
+This node is configured with a hostname XXXXXXXX.
+
+Registering droonga-http-server as a service...
+...
+Successfully installed droonga-http-server.
+~~~
+
+After that, do the same operations on the other computer `node1` (`192.168.100.51`) as well.
+Then the two computers are successfully prepared to work as Droonga nodes.
+
+### When your computers don't have a host name accessible from other computers... {#accessible-host-name}
+
+Each Droonga node must know the *accessible host name* of the node itself, to communicate with other nodes.
+
+The installation script guesses the accessible host name of the node automatically.
+You can confirm what value was detected as the host name of the node with the following command:
+
+~~~
+# cat ~droonga-engine/droonga/droonga-engine.yaml | grep host
+host: XXXXXXXX
+~~~
+
+However, it may be misdetected if the computer is not configured properly.
+For example, even if a node is configured with a host name `node0`, it cannot receive any messages from other nodes when the others cannot resolve the name `node0` to the actual IP address.
+
+Then you have to reconfigure your node with the raw IP address of the node itself, like:
+
+~~~
+(on node0=192.168.100.50)
+# host=192.168.100.50
+# droonga-engine-configure --quiet --reset-config --reset-catalog \
+                           --host=$host
+# droonga-http-server-configure --quiet --reset-config \
+                                --droonga-engine-host-name=$host \
+                                --receive-host-name=$host
+
+(on node1=192.168.100.51)
+# host=192.168.100.51
+...
+~~~
+
+Then your computer `node0` is configured as a Droonga node with the host name `192.168.100.50`, and `node1` becomes a node with the name `192.168.100.51`.
+As said before, *the configured name is used in various situations*, so *don't forget the name of each node*.
+
+This tutorial assumes that all your computers can resolve each other's host names `node0` and `node1` correctly.
+Otherwise, read the host names `node0` and `node1` in the following instructions as raw IP addresses like `192.168.100.50` and `192.168.100.51`.
+
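+You can verify name resolution from each node with a quick check like the following (a simple sketch; `getent` is part of glibc and available on both Ubuntu and CentOS):
+
+~~~
+(on node0)
+$ getent hosts node1
+192.168.100.51  node1
+~~~
+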
+By the way, you can specify your preferred value as the host name of the computer itself via environment variables for the installation scripts, like:
+
+~~~
+(on node0=192.168.100.50)
+# host=192.168.100.50
+# curl https://raw.githubusercontent.com/droonga/droonga-engine/master/install.sh | \
+    HOST=$host bash
+# curl https://raw.githubusercontent.com/droonga/droonga-http-server/master/install.sh | \
+    ENGINE_HOST=$host HOST=$host bash
+
+(on node1=192.168.100.51)
+# host=192.168.100.51
+...
+~~~
+
+This option will help you if you already know that your computers are not configured to resolve each other's names.
+
+### Configure nodes to work together as a cluster
+
+Currently, these nodes are still individual nodes.
+Let's configure them to work together as a cluster.
+
+Run commands like this, on each node:
+
+~~~
+# droonga-engine-catalog-generate --hosts=node0,node1
+~~~
+
+Of course you must specify correct host name of nodes by the `--hosts` option.
+If your nodes are configured with raw IP addresses, the command line is:
+
+~~~
+# droonga-engine-catalog-generate --hosts=192.168.100.50,192.168.100.51
+~~~
+
+OK, now your Droonga cluster is correctly prepared.
+Two nodes are configured to work together as a Droonga cluster.
+
+Let's continue to [the next step, "how to use the cluster"](#use).
+
+
+## Use the Droonga cluster, via HTTP {#use}
+
+### Start and stop services on each Droonga node
+
+You can run Groonga as an HTTP server daemon with the option `-d`, like:
+
+~~~
+# groonga -p 10041 -d --protocol http /tmp/databases/db
+~~~
+
+On the other hand, you have to run multiple server daemons for each Droonga node to use your Droonga cluster via HTTP.
+
+If you set up your Droonga nodes with the installation scripts, the daemons have already been configured as system services managed via the `service` command.
+To start them, run commands like the following on each Droonga node:
+
+~~~
+# service droonga-engine start
+# service droonga-http-server start
+~~~
+
+With these commands, the services start to work.
+Now the two nodes construct a cluster and they monitor each other.
+If one of the nodes dies and any node is still alive, the survivor(s) will keep working as the Droonga cluster.
+Then you can recover the dead node and re-join it to the cluster, without users noticing it.
+
+Let's make sure that the cluster works, by a Droonga command, `system.status`.
+You can see the result via HTTP, like:
+
+~~~
+$ curl "http://node0:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node1:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+The result says that the two nodes are working correctly.
+Because it is a cluster, the other endpoint returns the same result.
+
+~~~
+$ curl "http://node1:10041/droonga/system/status" | jq "."
+{
+  "nodes": {
+    "node0:10031/droonga": {
+      "live": true
+    },
+    "node1:10031/droonga": {
+      "live": true
+    }
+  }
+}
+~~~
+
+`droonga-http-server` connects to all `droonga-engine` nodes in the cluster, and distributes requests among them like a load balancer.
+Moreover, even if some `droonga-engine`s stop, `droonga-http-server` excludes those dead engines automatically, and the cluster keeps working correctly.
+
+To stop services, run commands like following on each Droonga node:
+
+~~~
+# service droonga-engine stop
+# service droonga-http-server stop
+~~~
+
+After verification, start the services again on each Droonga node.
+
+### Create a table, columns, and indexes
+
+Now your Droonga cluster actually works as an HTTP server compatible with Groonga's HTTP server.
+
+Requests are exactly the same as those for a Groonga server.
+To create a new table `Store`, you just have to send a GET request for the `table_create` command, like:
+
+~~~
+$ endpoint="http://node0:10041"
+$ curl "$endpoint/d/table_create?name=Store&flags=TABLE_PAT_KEY&key_type=ShortText" | jq "."
+[
+  [
+    0,
+    1401358896.360356,
+    0.0035653114318847656
+  ],
+  true
+]
+~~~
+
+Note that you have to specify as the host one of the Droonga nodes with an active droonga-http-server in your Droonga cluster.
+In other words, you can use any node you like in the cluster as an endpoint.
+All requests will be distributed to suitable nodes in the cluster.
+
+OK, now the table has been created successfully.
+Let's see it by the `table_list` command:
+
+~~~
+$ curl "$endpoint/d/table_list" | jq "."
+[
+  [
+    0,
+    1401358908.9126804,
+    0.001600027084350586
+  ],
+  [
+    [
+      [
+        "id",
+        "UInt32"
+      ],
+      [
+        "name",
+        "ShortText"
+      ],
+      [
+        "path",
+        "ShortText"
+      ],
+      [
+        "flags",
+        "ShortText"
+      ],
+      [
+        "domain",
+        "ShortText"
+      ],
+      [
+        "range",
+        "ShortText"
+      ],
+      [
+        "default_tokenizer",
+        "ShortText"
+      ],
+      [
+        "normalizer",
+        "ShortText"
+      ]
+    ],
+    [
+      256,
+      "Store",
+      "/home/vagrant/droonga/000/db.0000100",
+      "TABLE_PAT_KEY|PERSISTENT",
+      "ShortText",
+      null,
+      null,
+      null
+    ]
+  ]
+]
+~~~
+
+Because it is a cluster, the other endpoint returns the same result.
+
+~~~
+$ curl "http://node1:10041/d/table_list" | jq "."
+[
+  [
+    0,
+    1401358908.9126804,
+    0.001600027084350586
+  ],
+  [
+    [
+      [
+        "id",
+        "UInt32"
+      ],
+      [
+        "name",
+        "ShortText"
+      ],
+      [
+        "path",
+        "ShortText"
+      ],
+      [
+        "flags",
+        "ShortText"
+      ],
+      [
+        "domain",
+        "ShortText"
+      ],
+      [
+        "range",
+        "ShortText"
+      ],
+      [
+        "default_tokenizer",
+        "ShortText"
+      ],
+      [
+        "normalizer",
+        "ShortText"
+      ]
+    ],
+    [
+      256,
+      "Store",
+      "/home/vagrant/droonga/000/db.0000100",
+      "TABLE_PAT_KEY|PERSISTENT",
+      "ShortText",
+      null,
+      null,
+      null
+    ]
+  ]
+]
+~~~
+
+Next, create new columns `name` and `location` in the `Store` table with the `column_create` command, like:
+
+~~~
+$ curl "$endpoint/d/column_create?table=Store&name=name&flags=COLUMN_SCALAR&type=ShortText" | jq "."
+[
+  [
+    0,
+    1401358348.6541538,
+    0.0004096031188964844
+  ],
+  true
+]
+$ curl "$endpoint/d/column_create?table=Store&name=location&flags=COLUMN_SCALAR&type=WGS84GeoPoint" | jq "."
+[
+  [
+    0,
+    1401358359.084659,
+    0.002511262893676758
+  ],
+  true
+]
+~~~
+
+Create indexes also.
+
+~~~
+$ curl "$endpoint/d/table_create?name=Term&flags=TABLE_PAT_KEY&key_type=ShortText&default_tokenizer=TokenBigram&normalizer=NormalizerAuto" | jq "."
+[
+  [
+    0,
+    1401358475.7229664,
+    0.002419710159301758
+  ],
+  true
+]
+$ curl "$endpoint/d/column_create?table=Term&name=store_name&flags=COLUMN_INDEX|WITH_POSITION&type=Store&source=name" | jq "."
+[
+  [
+    0,
+    1401358494.1656318,
+    0.006799221038818359
+  ],
+  true
+]
+$ curl "$endpoint/d/table_create?name=Location&flags=TABLE_PAT_KEY&key_type=WGS84GeoPoint" | jq "."
+[
+  [
+    0,
+    1401358505.708896,
+    0.0016951560974121094
+  ],
+  true
+]
+$ curl "$endpoint/d/column_create?table=Location&name=store&flags=COLUMN_INDEX&type=Store&source=location" | jq "."
+[
+  [
+    0,
+    1401358519.6187897,
+    0.024788379669189453
+  ],
+  true
+]
+~~~
+
+Let's confirm results:
+
+~~~
+$ curl "$endpoint/d/table_list" | jq "."
+[
+  [
+    0,
+    1416390011.7194495,
+    0.0015704631805419922
+  ],
+  [
+    [
+      [
+        "id",
+        "UInt32"
+      ],
+      [
+        "name",
+        "ShortText"
+      ],
+      [
+        "path",
+        "ShortText"
+      ],
+      [
+        "flags",
+        "ShortText"
+      ],
+      [
+        "domain",
+        "ShortText"
+      ],
+      [
+        "range",
+        "ShortText"
+      ],
+      [
+        "default_tokenizer",
+        "ShortText"
+      ],
+      [
+        "normalizer",
+        "ShortText"
+      ]
+    ],
+    [
+      261,
+      "Location",
+      "/home/droonga-engine/droonga/databases/000/db.0000105",
+      "TABLE_PAT_KEY|PERSISTENT",
+      "WGS84GeoPoint",
+      null,
+      null,
+      null
+    ],
+    [
+      256,
+      "Store",
+      "/home/droonga-engine/droonga/databases/000/db.0000100",
+      "TABLE_PAT_KEY|PERSISTENT",
+      "ShortText",
+      null,
+      null,
+      null
+    ],
+    [
+      259,
+      "Term",
+      "/home/droonga-engine/droonga/databases/000/db.0000103",
+      "TABLE_PAT_KEY|PERSISTENT",
+      "ShortText",
+      null,
+      "TokenBigram",
+      "NormalizerAuto"
+    ]
+  ]
+]
+$ curl "$endpoint/d/column_list?table=Store" | jq "."
+[
+  [
+    0,
+    1416390069.515929,
+    0.001077413558959961
+  ],
+  [
+    [
+      [
+        "id",
+        "UInt32"
+      ],
+      [
+        "name",
+        "ShortText"
+      ],
+      [
+        "path",
+        "ShortText"
+      ],
+      [
+        "type",
+        "ShortText"
+      ],
+      [
+        "flags",
+        "ShortText"
+      ],
+      [
+        "domain",
+        "ShortText"
+      ],
+      [
+        "range",
+        "ShortText"
+      ],
+      [
+        "source",
+        "ShortText"
+      ]
+    ],
+    [
+      256,
+      "_key",
+      "",
+      "",
+      "COLUMN_SCALAR",
+      "Store",
+      "ShortText",
+      []
+    ],
+    [
+      258,
+      "location",
+      "/home/droonga-engine/droonga/databases/000/db.0000102",
+      "fix",
+      "COLUMN_SCALAR",
+      "Store",
+      "WGS84GeoPoint",
+      []
+    ],
+    [
+      257,
+      "name",
+      "/home/droonga-engine/droonga/databases/000/db.0000101",
+      "var",
+      "COLUMN_SCALAR",
+      "Store",
+      "ShortText",
+      []
+    ]
+  ]
+]
+$ curl "$endpoint/d/column_list?table=Term" | jq "."
+[
+  [
+    0,
+    1416390110.143951,
+    0.0013172626495361328
+  ],
+  [
+    [
+      [
+        "id",
+        "UInt32"
+      ],
+      [
+        "name",
+        "ShortText"
+      ],
+      [
+        "path",
+        "ShortText"
+      ],
+      [
+        "type",
+        "ShortText"
+      ],
+      [
+        "flags",
+        "ShortText"
+      ],
+      [
+        "domain",
+        "ShortText"
+      ],
+      [
+        "range",
+        "ShortText"
+      ],
+      [
+        "source",
+        "ShortText"
+      ]
+    ],
+    [
+      259,
+      "_key",
+      "",
+      "",
+      "COLUMN_SCALAR",
+      "Term",
+      "ShortText",
+      []
+    ],
+    [
+      260,
+      "store_name",
+      "/home/droonga-engine/droonga/databases/000/db.0000104",
+      "index",
+      "COLUMN_INDEX|WITH_POSITION",
+      "Term",
+      "Store",
+      [
+        "name"
+      ]
+    ]
+  ]
+]
+$ curl "$endpoint/d/column_list?table=Location" | jq "."
+[
+  [
+    0,
+    1416390163.0140722,
+    0.0009713172912597656
+  ],
+  [
+    [
+      [
+        "id",
+        "UInt32"
+      ],
+      [
+        "name",
+        "ShortText"
+      ],
+      [
+        "path",
+        "ShortText"
+      ],
+      [
+        "type",
+        "ShortText"
+      ],
+      [
+        "flags",
+        "ShortText"
+      ],
+      [
+        "domain",
+        "ShortText"
+      ],
+      [
+        "range",
+        "ShortText"
+      ],
+      [
+        "source",
+        "ShortText"
+      ]
+    ],
+    [
+      261,
+      "_key",
+      "",
+      "",
+      "COLUMN_SCALAR",
+      "Location",
+      "WGS84GeoPoint",
+      []
+    ],
+    [
+      262,
+      "store",
+      "/home/droonga-engine/droonga/databases/000/db.0000106",
+      "index",
+      "COLUMN_INDEX",
+      "Location",
+      "Store",
+      [
+        "location"
+      ]
+    ]
+  ]
+]
+~~~
+
+### Load data to a table
+
+Let's load data into the `Store` table.
+First, prepare the data as a JSON file named `stores.json`.
+
+stores.json:
+
+~~~
+[
+["_key","name","location"],
+["store0","1st Avenue & 75th St. - New York NY  (W)","40.770262,-73.954798"],
+["store1","76th & Second - New York NY  (W)","40.771056,-73.956757"],
+["store2","2nd Ave. & 9th Street - New York NY","40.729445,-73.987471"],
+["store3","15th & Third - New York NY  (W)","40.733946,-73.9867"],
+["store4","41st and Broadway - New York NY  (W)","40.755111,-73.986225"],
+["store5","84th & Third Ave - New York NY  (W)","40.777485,-73.954979"],
+["store6","150 E. 42nd Street - New York NY  (W)","40.750784,-73.975582"],
+["store7","West 43rd and Broadway - New York NY  (W)","40.756197,-73.985624"],
+["store8","Macy's 35th Street Balcony - New York NY","40.750703,-73.989787"],
+["store9","Macy's 6th Floor - Herald Square - New York NY  (W)","40.750703,-73.989787"],
+["store10","Herald Square- Macy's - New York NY","40.750703,-73.989787"],
+["store11","Macy's 5th Floor - Herald Square - New York NY  (W)","40.750703,-73.989787"],
+["store12","80th & York - New York NY  (W)","40.772204,-73.949862"],
+["store13","Columbus @ 67th - New York NY  (W)","40.774009,-73.981472"],
+["store14","45th & Broadway - New York NY  (W)","40.75766,-73.985719"],
+["store15","Marriott Marquis - Lobby - New York NY","40.759123,-73.984927"],
+["store16","Second @ 81st - New York NY  (W)","40.77466,-73.954447"],
+["store17","52nd & Seventh - New York NY  (W)","40.761829,-73.981141"],
+["store18","1585 Broadway (47th) - New York NY  (W)","40.759806,-73.985066"],
+["store19","85th & First - New York NY  (W)","40.776101,-73.949971"],
+["store20","92nd & 3rd - New York NY  (W)","40.782606,-73.951235"],
+["store21","165 Broadway - 1 Liberty - New York NY  (W)","40.709727,-74.011395"],
+["store22","1656 Broadway - New York NY  (W)","40.762434,-73.983364"],
+["store23","54th & Broadway - New York NY  (W)","40.764275,-73.982361"],
+["store24","Limited Brands-NYC - New York NY","40.765219,-73.982025"],
+["store25","19th & 8th - New York NY  (W)","40.743218,-74.000605"],
+["store26","60th & Broadway-II - New York NY  (W)","40.769196,-73.982576"],
+["store27","63rd & Broadway - New York NY  (W)","40.771376,-73.982709"],
+["store28","195 Broadway - New York NY  (W)","40.710703,-74.009485"],
+["store29","2 Broadway - New York NY  (W)","40.704538,-74.01324"],
+["store30","2 Columbus Ave. - New York NY  (W)","40.769262,-73.984764"],
+["store31","NY Plaza - New York NY  (W)","40.702802,-74.012784"],
+["store32","36th and Madison - New York NY  (W)","40.748917,-73.982683"],
+["store33","125th St. btwn Adam Clayton & FDB - New York NY","40.808952,-73.948229"],
+["store34","70th & Broadway - New York NY  (W)","40.777463,-73.982237"],
+["store35","2138 Broadway - New York NY  (W)","40.781078,-73.981167"],
+["store36","118th & Frederick Douglas Blvd. - New York NY  (W)","40.806176,-73.954109"],
+["store37","42nd & Second - New York NY  (W)","40.750069,-73.973393"],
+["store38","Broadway @ 81st - New York NY  (W)","40.784972,-73.978987"],
+["store39","Fashion Inst of Technology - New York NY","40.746948,-73.994557"]
+]
+~~~
+
+Then send it as the body of a POST request to the `load` command, like this:
+
+~~~
+$ curl --data "@stores.json" "$endpoint/d/load?table=Store" | jq "."
+[
+  [
+    0,
+    1401358564.909,
+    0.158
+  ],
+  [
+    40
+  ]
+]
+~~~
+
+Now all the data in the JSON file has been loaded successfully.
+
+### Select data from a table
+
+OK, all the data is now ready.
+
+As a starter, let's select the first ten records with the `select` command:
+
+~~~
+$ curl "$endpoint/d/select?table=Store&output_columns=name&limit=10" | jq "."
+[
+  [
+    0,
+    1401362059.7437818,
+    4.935264587402344e-05
+  ],
+  [
+    [
+      [
+        40
+      ],
+      [
+        [
+          "name",
+          "ShortText"
+        ]
+      ],
+      [
+        "1st Avenue & 75th St. - New York NY  (W)"
+      ],
+      [
+        "76th & Second - New York NY  (W)"
+      ],
+      [
+        "Herald Square- Macy's - New York NY"
+      ],
+      [
+        "Macy's 5th Floor - Herald Square - New York NY  (W)"
+      ],
+      [
+        "80th & York - New York NY  (W)"
+      ],
+      [
+        "Columbus @ 67th - New York NY  (W)"
+      ],
+      [
+        "45th & Broadway - New York NY  (W)"
+      ],
+      [
+        "Marriott Marquis - Lobby - New York NY"
+      ],
+      [
+        "Second @ 81st - New York NY  (W)"
+      ],
+      [
+        "52nd & Seventh - New York NY  (W)"
+      ]
+    ]
+  ]
+]
+~~~
+
+Of course you can specify search conditions, via the `query` option or the `filter` option:
+
+~~~
+$ curl "$endpoint/d/select?table=Store&query=Columbus&match_columns=name&output_columns=name&limit=10" | jq "."
+[
+  [
+    0,
+    1398670157.661574,
+    0.0012705326080322266
+  ],
+  [
+    [
+      [
+        2
+      ],
+      [
+        [
+          "_key",
+          "ShortText"
+        ]
+      ],
+      [
+        "Columbus @ 67th - New York NY  (W)"
+      ],
+      [
+        "2 Columbus Ave. - New York NY  (W)"
+      ]
+    ]
+  ]
+]
+$ curl "$endpoint/d/select?table=Store&filter=name@'Ave'&output_columns=name&limit=10" | jq "."
+[
+  [
+    0,
+    1398670586.193325,
+    0.0003848075866699219
+  ],
+  [
+    [
+      [
+        3
+      ],
+      [
+        [
+          "_key",
+          "ShortText"
+        ]
+      ],
+      [
+        "2nd Ave. & 9th Street - New York NY"
+      ],
+      [
+        "84th & Third Ave - New York NY  (W)"
+      ],
+      [
+        "2 Columbus Ave. - New York NY  (W)"
+      ]
+    ]
+  ]
+]
+~~~
+
+## Conclusion
+
+In this tutorial, you set up a [Droonga][] cluster on [Ubuntu Linux][Ubuntu] or [CentOS][] computers.
+Moreover, you successfully loaded data into it and selected data from it, using it as a [Groonga][] compatible server.
+
+Currently, Droonga supports only a limited subset of Groonga compatible commands.
+See the [command reference][] for more details.
+
+Next, let's learn [how to back up and restore the contents of a Droonga cluster](../dump-restore/).
+
+  [Ubuntu]: http://www.ubuntu.com/
+  [CentOS]: https://www.centos.org/
+  [Droonga]: https://droonga.org/
+  [Groonga]: http://groonga.org/
+  [command reference]: ../../reference/commands/

  Added: tutorial/1.1.0/index.md (+22 -0) 100644
===================================================================
--- /dev/null
+++ tutorial/1.1.0/index.md    2014-11-30 23:20:40 +0900 (448c755)
@@ -0,0 +1,22 @@
+---
+title: Droonga tutorial
+layout: en
+---
+
+## For beginners and Groonga users
+
+ * [Getting started/How to migrate from Groonga?](groonga/)
+   * [How to prepare virtual machines for experiments?](virtual-machines-for-experiments/)
+ * [How to backup and restore the database?](dump-restore/)
+ * [How to add a new replica to an existing cluster?](add-replica/)
+ * [How to benchmark Droonga with Groonga?](benchmark/)
+
+## For low-layer application developers
+
+ * [Basic usage of low-layer commands](basic/)
+
+## For plugin developers
+
+ * [Plugin development tutorial](plugin-development/)
+
+

  Added: tutorial/1.1.0/plugin-development/adapter/index.md (+695 -0) 100644
===================================================================
--- /dev/null
+++ tutorial/1.1.0/plugin-development/adapter/index.md    2014-11-30 23:20:40 +0900 (3f5f9bc)
@@ -0,0 +1,695 @@
+---
+title: "Plugin: Adapt requests and responses, to add a new command based on other existing commands"
+layout: en
+---
+
+* TOC
+{:toc}
+
+## The goal of this tutorial
+
+This tutorial aims to help you learn how to develop a Droonga plugin by yourself.
+
+This page focuses on the "adaption" done by Droonga plugins.
+At the end, we will create a new command `storeSearch` based on the existing `search` command, with a small practical plugin.
+
+## Precondition
+
+* You must complete the [basic tutorial][].
+
+
+## Adaption for incoming messages
+
+First, let's study the basics with a simple logger plugin named `sample-logger` that works at the adaption phase.
+
+We sometimes need to modify incoming requests sent from the outside to the Droonga Engine.
+We can use a plugin for this purpose.
+
+In this section, let's see how to create a plugin for the *pre adaption phase*.
+
+### Directory Structure
+
+Assume that we are going to add a new plugin to the system built in the [basic tutorial][].
+In that tutorial, the Droonga Engine was placed under the `engine` directory.
+
+Plugins need to be placed in an appropriate directory. Let's create the directory:
+
+~~~
+# cd engine
+# mkdir -p lib/droonga/plugins
+~~~
+
+After creating the directory, the directory structure should be like this:
+
+~~~
+engine
+├── catalog.json
+├── fluentd.conf
+└── lib
+    └── droonga
+        └── plugins
+~~~
+
+
+### Create a plugin
+
+You must put the code for a plugin into a file whose name is *the same as the plugin itself*.
+Because the plugin you are now creating is `sample-logger`, put the code into a file `sample-logger.rb` in the `droonga/plugins` directory.
+
+lib/droonga/plugins/sample-logger.rb:
+
+~~~ruby
+require "droonga/plugin"
+
+module Droonga
+  module Plugins
+    module SampleLoggerPlugin
+      extend Plugin
+      register("sample-logger")
+
+      class Adapter < Droonga::Adapter
+        # You'll put codes to modify messages here.
+      end
+    end
+  end
+end
+~~~
+
+This plugin does nothing except registering itself to the Droonga Engine.
+
+ * `sample-logger` is the name of the plugin itself. You'll use it in your `catalog.json` to activate the plugin.
+ * As in the example above, you must define your plugin as a module.
+ * Behaviors at the pre adaption phase are defined in a class called an *adapter*.
+   An adapter class must be defined as a subclass of `Droonga::Adapter`, under the namespace of the plugin module.
+
+
+### Activate the plugin with `catalog.json`
+
+You need to update `catalog.json` to activate your plugin.
+Insert the name of the plugin, `"sample-logger"`, into the `"plugins"` list under the dataset, like this:
+
+catalog.json:
+
+~~~
+(snip)
+      "datasets": {
+        "Starbucks": {
+          (snip)
+          "plugins": ["sample-logger", "groonga", "crud", "search", "dump", "status"],
+(snip)
+~~~
+
+Note: you must place `"sample-logger"` before `"search"`, because the `sample-logger` plugin depends on `search`. The Droonga Engine applies plugins at the pre adaption phase in the order defined in `catalog.json`, so you must resolve plugin dependencies by hand (for now).
+
+### Run and test
+
+Let's get Droonga started.
+Note that you need to add the `./lib` directory to the `RUBYLIB` environment variable so that Ruby can find your plugin.
+
+~~~
+# kill $(cat fluentd.pid)
+# RUBYLIB=./lib fluentd --config fluentd.conf --log fluentd.log --daemon fluentd.pid
+~~~
+
+Then verify that the engine is working correctly.
+First, create a request as a JSON file.
+
+search-columbus.json:
+
+~~~json
+{
+  "dataset" : "Starbucks",
+  "type"    : "search",
+  "body"    : {
+    "queries" : {
+      "stores" : {
+        "source"    : "Store",
+        "condition" : {
+          "query"   : "Columbus",
+          "matchTo" : "_key"
+        },
+        "output" : {
+          "elements"   : [
+            "startTime",
+            "elapsedTime",
+            "count",
+            "attributes",
+            "records"
+          ],
+          "attributes" : ["_key"],
+          "limit"      : -1
+        }
+      }
+    }
+  }
+}
+~~~
+
+This corresponds to the example of searching for "Columbus" in the [basic tutorial][].
+Note that the request for the Protocol Adapter is encapsulated in the `"body"` element.
+
+Send the request to the engine with `droonga-request`:
+
+~~~
+# droonga-request --tag starbucks search-columbus.json
+Elapsed time: 0.021544
+[
+  "droonga.message",
+  1392617533,
+  {
+    "inReplyTo": "1392617533.9644868",
+    "statusCode": 200,
+    "type": "search.result",
+    "body": {
+      "stores": {
+        "count": 2,
+        "records": [
+          [
+            "Columbus @ 67th - New York NY  (W)"
+          ],
+          [
+            "2 Columbus Ave. - New York NY  (W)"
+          ]
+        ]
+      }
+    }
+  }
+]
+~~~
+
+This is the search result.
+
+
+### Do something in the plugin: take logs
+
+The plugin we have created does nothing so far. Let's make the plugin do something interesting.
+
+First of all, trap `search` requests and log them. Update the plugin as below:
+
+lib/droonga/plugins/sample-logger.rb:
+
+~~~ruby
+(snip)
+    module SampleLoggerPlugin
+      extend Plugin
+      register("sample-logger")
+
+      class Adapter < Droonga::Adapter
+        input_message.pattern = ["type", :equal, "search"]
+
+        def adapt_input(input_message)
+          logger.info("SampleLoggerPlugin::Adapter", :message => input_message)
+        end
+      end
+    end
+(snip)
+~~~
+
+The line beginning with `input_message.pattern` is a configuration.
+This example makes the plugin handle any incoming message with `"type":"search"`.
+See the [reference manual's configuration section](../../../reference/plugin/adapter/#config) for details.
+
+The method `adapt_input` is called for every incoming message matching the pattern.
+The argument `input_message` is a wrapped version of the incoming message.
+
+Restart fluentd:
+
+~~~
+# kill $(cat fluentd.pid)
+# RUBYLIB=./lib fluentd --config fluentd.conf --log fluentd.log --daemon fluentd.pid
+~~~
+
+Send the same request as in the previous section:
+
+~~~
+# droonga-request --tag starbucks search-columbus.json
+Elapsed time: 0.014714
+[
+  "droonga.message",
+  1392618037,
+  {
+    "inReplyTo": "1392618037.935901",
+    "statusCode": 200,
+    "type": "search.result",
+    "body": {
+      "stores": {
+        "count": 2,
+        "records": [
+          [
+            "Columbus @ 67th - New York NY  (W)"
+          ],
+          [
+            "2 Columbus Ave. - New York NY  (W)"
+          ]
+        ]
+      }
+    }
+  }
+]
+~~~
+
+You will see something like the following in fluentd's log file `fluentd.log`:
+
+~~~
+2014-02-17 15:20:37 +0900 [info]: SampleLoggerPlugin::Adapter message=#<Droonga::InputMessage:0x007f8ae3e1dd98 @raw_message={"dataset"=>"Starbucks", "type"=>"search", "body"=>{"queries"=>{"stores"=>{"source"=>"Store", "condition"=>{"query"=>"Columbus", "matchTo"=>"_key"}, "output"=>{"elements"=>["startTime", "elapsedTime", "count", "attributes", "records"], "attributes"=>["_key"], "limit"=>-1}}}}, "replyTo"=>{"type"=>"search.result", "to"=>"127.0.0.1:64591/droonga"}, "id"=>"1392618037.935901", "date"=>"2014-02-17 15:20:37 +0900", "appliedAdapters"=>[]}>
+~~~
+
+This shows that the message is received by our `SampleLoggerPlugin::Adapter` and then passed on to Droonga. Here we can modify the message before the actual data processing happens.
+
+### Modify messages with the plugin
+
+Suppose that we want to restrict the number of records returned in the response to, say, `1`.
+What we need to do is set `limit` to `1` for every request.
+Update the plugin as below:
+
+lib/droonga/plugins/sample-logger.rb:
+
+~~~ruby
+(snip)
+        def adapt_input(input_message)
+          logger.info("SampleLoggerPlugin::Adapter", :message => input_message)
+          input_message.body["queries"]["stores"]["output"]["limit"] = 1
+        end
+(snip)
+~~~
+
+Like above, you can modify the incoming message via methods of the argument `input_message`.
+See the [reference manual for the message class](../../../reference/plugin/adapter/#classes-Droonga-InputMessage).
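+
+By the way, the pattern `["type", :equal, "search"]` matches *every* incoming `search` message, not only the requests used in this tutorial, so a request without a `"stores"` query would make the code above raise an error. A more defensive variant could look like the following sketch (not required for this tutorial; it only assumes the message structure shown above):
+
+~~~ruby
+(snip)
+        def adapt_input(input_message)
+          logger.info("SampleLoggerPlugin::Adapter", :message => input_message)
+          # Guard against "search" requests which have no "stores" query
+          # or no "output" section, instead of assuming they always exist.
+          queries = input_message.body["queries"] || {}
+          stores_query = queries["stores"]
+          return if stores_query.nil? || stores_query["output"].nil?
+          stores_query["output"]["limit"] = 1
+        end
+(snip)
+~~~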
+
+Restart fluentd:
+
+~~~
+# kill $(cat fluentd.pid)
+# RUBYLIB=./lib fluentd --config fluentd.conf --log fluentd.log --daemon fluentd.pid
+~~~
+
+After the restart, the response always includes only one record in the `records` section.
+
+Send the same request as before:
+
+~~~
+# droonga-request --tag starbucks search-columbus.json
+Elapsed time: 0.017343
+[
+  "droonga.message",
+  1392618279,
+  {
+    "inReplyTo": "1392618279.0578449",
+    "statusCode": 200,
+    "type": "search.result",
+    "body": {
+      "stores": {
+        "count": 2,
+        "records": [
+          [
+            "Columbus @ 67th - New York NY  (W)"
+          ]
+        ]
+      }
+    }
+  }
+]
+~~~
+
+Note that `count` is still `2`, because `limit` does not affect `count`. See [search][] for details of the `search` command.
+
+You will see something like the following in fluentd's log file `fluentd.log`:
+
+~~~
+2014-02-17 15:24:39 +0900 [info]: SampleLoggerPlugin::Adapter message=#<Droonga::InputMessage:0x007f956685c908 @raw_message={"dataset"=>"Starbucks", "type"=>"search", "body"=>{"queries"=>{"stores"=>{"source"=>"Store", "condition"=>{"query"=>"Columbus", "matchTo"=>"_key"}, "output"=>{"elements"=>["startTime", "elapsedTime", "count", "attributes", "records"], "attributes"=>["_key"], "limit"=>-1}}}}, "replyTo"=>{"type"=>"search.result", "to"=>"127.0.0.1:64616/droonga"}, "id"=>"1392618279.0578449", "date"=>"2014-02-17 15:24:39 +0900", "appliedAdapters"=>[]}>
+~~~
+
+
+## Adaption for outgoing messages
+
+When we need to modify outgoing messages from the Droonga Engine, for example search results, we can do it simply with another method.
+In this section, we are going to define a method to adapt outgoing messages.
+
+
+### Add a method to adapt outgoing messages
+
+Let's log the results of the `search` command.
+Define the `adapt_output` method to process outgoing messages.
+Remove `adapt_input` for now, for simplicity.
+
+lib/droonga/plugins/sample-logger.rb:
+
+~~~ruby
+(snip)
+    module SampleLoggerPlugin
+      extend Plugin
+      register("sample-logger")
+
+      class Adapter < Droonga::Adapter
+        input_message.pattern = ["type", :equal, "search"]
+
+        def adapt_output(output_message)
+          logger.info("SampleLoggerPlugin::Adapter", :message => output_message)
+        end
+      end
+    end
+(snip)
+~~~
+
+The method `adapt_output` is called only for outgoing messages triggered by incoming messages which were trapped by the plugin itself; this holds even if only the matching pattern is defined and the `adapt_input` method is not.
+See the [reference manual for plugin developers](../../../reference/plugin/adapter/) for more details.
+
+### Run
+
+Let's restart fluentd:
+
+~~~
+# kill $(cat fluentd.pid)
+# RUBYLIB=./lib fluentd --config fluentd.conf --log fluentd.log --daemon fluentd.pid
+~~~
+
+And send a search request (use the same request JSON as in the previous section):
+
+~~~
+# droonga-request --tag starbucks search-columbus.json
+Elapsed time: 0.015491
+[
+  "droonga.message",
+  1392619269,
+  {
+    "inReplyTo": "1392619269.184789",
+    "statusCode": 200,
+    "type": "search.result",
+    "body": {
+      "stores": {
+        "count": 2,
+        "records": [
+          [
+            "Columbus @ 67th - New York NY  (W)"
+          ],
+          [
+            "2 Columbus Ave. - New York NY  (W)"
+          ]
+        ]
+      }
+    }
+  }
+]
+~~~
+
+Fluentd's log should look like the following:
+
+~~~
+2014-02-17 15:41:09 +0900 [info]: SampleLoggerPlugin::Adapter message=#<Droonga::OutputMessage:0x007fddcad4d5a0 @raw_message={"dataset"=>"Starbucks", "type"=>"dispatcher", "body"=>{"stores"=>{"count"=>2, "records"=>[["Columbus @ 67th - New York NY  (W)"], ["2 Columbus Ave. - New York NY  (W)"]]}}, "replyTo"=>{"type"=>"search.result", "to"=>"127.0.0.1:64724/droonga"}, "id"=>"1392619269.184789", "date"=>"2014-02-17 15:41:09 +0900", "appliedAdapters"=>["Droonga::Plugins::SampleLoggerPlugin::Adapter", "Droonga::Plugins::Error::Adapter"]}>
+~~~
+
+This shows that the result of `search` is passed to the `adapt_output` method (and logged), and then sent out.
+
+
+### Modify results in the adaption phase
+
+Let's modify the result at the *post adaption phase*.
+For example, add a `completedAt` attribute that shows the time when the request was completed.
+Update your plugin as follows:
+
+lib/droonga/plugins/sample-logger.rb:
+
+~~~ruby
+(snip)
+        def adapt_output(output_message)
+          logger.info("SampleLoggerPlugin::Adapter", :message => output_message)
+          output_message.body["stores"]["completedAt"] = Time.now
+        end
+(snip)
+~~~
+
+As shown above, you can modify the outgoing message via methods of the argument `output_message`.
+See the [reference manual for the message class](../../../reference/plugin/adapter/#classes-Droonga-OutputMessage).
+
+Restart fluentd:
+
+~~~
+# kill $(cat fluentd.pid)
+# RUBYLIB=./lib fluentd --config fluentd.conf --log fluentd.log --daemon fluentd.pid
+~~~
+
+Send the same search request:
+
+~~~
+# droonga-request --tag starbucks search-columbus.json
+Elapsed time: 0.013983
+[
+  "droonga.message",
+  1392619528,
+  {
+    "inReplyTo": "1392619528.235121",
+    "statusCode": 200,
+    "type": "search.result",
+    "body": {
+      "stores": {
+        "count": 2,
+        "records": [
+          [
+            "Columbus @ 67th - New York NY  (W)"
+          ],
+          [
+            "2 Columbus Ave. - New York NY  (W)"
+          ]
+        ],
+        "completedAt": "2014-02-17T06:45:28.247669Z"
+      }
+    }
+  }
+]
+~~~
+
+Now you can see the `completedAt` attribute containing the time when the request was completed.
+The corresponding log in `fluentd.log` will look like this:
+
+~~~
+2014-02-17 15:45:28 +0900 [info]: SampleLoggerPlugin::Adapter message=#<Droonga::OutputMessage:0x007fd384f3ab60 @raw_message={"dataset"=>"Starbucks", "type"=>"dispatcher", "body"=>{"stores"=>{"count"=>2, "records"=>[["Columbus @ 67th - New York NY  (W)"], ["2 Columbus Ave. - New York NY  (W)"]]}}, "replyTo"=>{"type"=>"search.result", "to"=>"127.0.0.1:64849/droonga"}, "id"=>"1392619528.235121", "date"=>"2014-02-17 15:45:28 +0900", "appliedAdapters"=>["Droonga::Plugins::SampleLoggerPlugin::Adapter", "Droonga::Plugins::Error::Adapter"]}>
+~~~
+
+
+## Adaption for both incoming and outgoing messages
+
+So far we have learned the basics of plugins for the pre adaption phase and the post adaption phase.
+Let's try to build a more practical plugin.
+
+You may feel that Droonga's `search` command is too flexible for your purpose.
+Here, we're going to add our own `storeSearch` command which wraps the `search` command, in order to provide a simple, application-specific interface, with a new plugin named `store-search`.
+
+### Accepting simple requests
+
+First, create the `store-search` plugin.
+Remember, you must put the code into a file whose name is the same as the plugin you are creating.
+So the file is `store-search.rb` in the `droonga/plugins` directory. Then define your `StoreSearchPlugin` as follows:
+
+lib/droonga/plugins/store-search.rb:
+
+~~~ruby
+require "droonga/plugin"
+
+module Droonga
+  module Plugins
+    module StoreSearchPlugin
+      extend Plugin
+      register("store-search")
+
+      class Adapter < Droonga::Adapter
+        input_message.pattern = ["type", :equal, "storeSearch"]
+
+        def adapt_input(input_message)
+          logger.info("StoreSearchPlugin::Adapter", :message => input_message)
+
+          query = input_message.body["query"]
+          logger.info("storeSearch", :query => query)
+
+          body = {
+            "queries" => {
+              "stores" => {
+                "source"    => "Store",
+                "condition" => {
+                  "query"   => query,
+                  "matchTo" => "_key",
+                },
+                "output"    => {
+                  "elements"   => [
+                    "startTime",
+                    "elapsedTime",
+                    "count",
+                    "attributes",
+                    "records",
+                  ],
+                  "attributes" => [
+                    "_key",
+                  ],
+                  "limit"      => -1,
+                }
+              }
+            }
+          }
+
+          input_message.type = "search"
+          input_message.body = body
+        end
+      end
+    end
+  end
+end
+~~~
+
+Then update your `catalog.json` to activate the plugin.
+Remove the `sample-logger` plugin previously created.
+
+catalog.json:
+
+~~~
+(snip)
+      "datasets": {
+        "Starbucks": {
+          (snip)
+          "plugins": ["store-search", "groonga", "crud", "search", "dump", "status"],
+(snip)
+~~~
+
+Remember, you must place your plugin `"store-search"` before `"search"`, because yours depends on it.
+
+Restart fluentd:
+
+~~~
+# kill $(cat fluentd.pid)
+# RUBYLIB=./lib fluentd --config fluentd.conf --log fluentd.log --daemon fluentd.pid
+~~~
+
+Now you can use this new command by the following request:
+
+store-search-columbus.json:
+
+~~~json
+{
+  "dataset" : "Starbucks",
+  "type"    : "storeSearch",
+  "body"    : {
+    "query" : "Columbus"
+  }
+}
+~~~
+
+In order to issue this request, you need to run:
+
+~~~
+# droonga-request --tag starbucks store-search-columbus.json
+Elapsed time: 0.01494
+[
+  "droonga.message",
+  1392621168,
+  {
+    "inReplyTo": "1392621168.0119512",
+    "statusCode": 200,
+    "type": "storeSearch.result",
+    "body": {
+      "stores": {
+        "count": 2,
+        "records": [
+          [
+            "Columbus @ 67th - New York NY  (W)"
+          ],
+          [
+            "2 Columbus Ave. - New York NY  (W)"
+          ]
+        ]
+      }
+    }
+  }
+]
+~~~
+
+And you will see the following in fluentd's log file `fluentd.log`:
+
+~~~
+2014-02-17 16:12:48 +0900 [info]: StoreSearchPlugin::Adapter message=#<Droonga::InputMessage:0x007fe4791d3958 @raw_message={"dataset"=>"Starbucks", "type"=>"storeSearch", "body"=>{"query"=>"Columbus"}, "replyTo"=>{"type"=>"storeSearch.result", "to"=>"127.0.0.1:49934/droonga"}, "id"=>"1392621168.0119512", "date"=>"2014-02-17 16:12:48 +0900", "appliedAdapters"=>[]}>
+2014-02-17 16:12:48 +0900 [info]: storeSearch query="Columbus"
+~~~
+
+Now we can perform store searches with simple requests.
+
+Note: look at the `"type"` of the response message. It is now `"storeSearch.result"` instead of `"search.result"`. Because the response is triggered by an incoming message with the type `"storeSearch"`, the outgoing message automatically gets the type `"(incoming command).result"`. In other words, you don't have to set the type of the outgoing message back by hand, the way you had to change the incoming message's type with `input_message.type = "search"` in the method `adapt_input`.
+
+### Returning simple responses
+
+Second, let's return results in a simpler way: just an array of store names.
+
+Define the `adapt_output` method as follows.
+
+lib/droonga/plugins/store-search.rb:
+
+~~~ruby
+(snip)
+    module StoreSearchPlugin
+      extend Plugin
+      register("store-search")
+
+      class Adapter < Droonga::Adapter
+        (snip)
+
+        def adapt_output(output_message)
+          logger.info("StoreSearchPlugin::Adapter", :message => output_message)
+
+          records = output_message.body["stores"]["records"]
+          simplified_results = records.flatten
+
+          output_message.body = simplified_results
+        end
+      end
+    end
+(snip)
+~~~
+
+The `adapt_output` method receives only the outgoing messages that correspond to incoming messages trapped by the plugin.
+
+Restart fluentd:
+
+~~~
+# kill $(cat fluentd.pid)
+# RUBYLIB=./lib fluentd --config fluentd.conf --log fluentd.log --daemon fluentd.pid
+~~~
+
+Send the request:
+
+~~~
+# droonga-request --tag starbucks store-search-columbus.json
+Elapsed time: 0.014859
+[
+  "droonga.message",
+  1392621288,
+  {
+    "inReplyTo": "1392621288.158763",
+    "statusCode": 200,
+    "type": "storeSearch.result",
+    "body": [
+      "Columbus @ 67th - New York NY  (W)",
+      "2 Columbus Ave. - New York NY  (W)"
+    ]
+  }
+]
+~~~
+
+The log in `fluentd.log` will be like this:
+
+~~~
+2014-02-17 16:14:48 +0900 [info]: StoreSearchPlugin::Adapter message=#<Droonga::InputMessage:0x007ffb8ada9d68 @raw_message={"dataset"=>"Starbucks", "type"=>"storeSearch", "body"=>{"query"=>"Columbus"}, "replyTo"=>{"type"=>"storeSearch.result", "to"=>"127.0.0.1:49960/droonga"}, "id"=>"1392621288.158763", "date"=>"2014-02-17 16:14:48 +0900", "appliedAdapters"=>[]}>
+2014-02-17 16:14:48 +0900 [info]: storeSearch query="Columbus"
+2014-02-17 16:14:48 +0900 [info]: StoreSearchPlugin::Adapter message=#<Droonga::OutputMessage:0x007ffb8ad78e48 @raw_message={"dataset"=>"Starbucks", "type"=>"dispatcher", "body"=>{"stores"=>{"count"=>2, "records"=>[["Columbus @ 67th - New York NY  (W)"], ["2 Columbus Ave. - New York NY  (W)"]]}}, "replyTo"=>{"type"=>"storeSearch.result", "to"=>"127.0.0.1:49960/droonga"}, "id"=>"1392621288.158763", "date"=>"2014-02-17 16:14:48 +0900", "appliedAdapters"=>["Droonga::Plugins::StoreSearchPlugin::Adapter", "Droonga::Plugins::Error::Adapter"], "originalTypes"=>["storeSearch"]}>
+~~~
+
+Now you've got the simplified response.
+
+In the way described above, we can use an adapter to implement application-specific search logic.
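+
+For reference, here is the complete `store-search.rb` assembled from the snippets above (the comments are added here for clarity; they are not part of the snippets shown earlier):
+
+~~~ruby
+require "droonga/plugin"
+
+module Droonga
+  module Plugins
+    module StoreSearchPlugin
+      extend Plugin
+      register("store-search")
+
+      class Adapter < Droonga::Adapter
+        # Trap only incoming messages for our new command.
+        input_message.pattern = ["type", :equal, "storeSearch"]
+
+        # Convert a simple storeSearch request into a full search request.
+        def adapt_input(input_message)
+          logger.info("StoreSearchPlugin::Adapter", :message => input_message)
+
+          query = input_message.body["query"]
+          logger.info("storeSearch", :query => query)
+
+          body = {
+            "queries" => {
+              "stores" => {
+                "source"    => "Store",
+                "condition" => {
+                  "query"   => query,
+                  "matchTo" => "_key",
+                },
+                "output"    => {
+                  "elements"   => [
+                    "startTime",
+                    "elapsedTime",
+                    "count",
+                    "attributes",
+                    "records",
+                  ],
+                  "attributes" => [
+                    "_key",
+                  ],
+                  "limit"      => -1,
+                }
+              }
+            }
+          }
+
+          input_message.type = "search"
+          input_message.body = body
+        end
+
+        # Convert the full search result into a simple array of store names.
+        def adapt_output(output_message)
+          logger.info("StoreSearchPlugin::Adapter", :message => output_message)
+
+          records = output_message.body["stores"]["records"]
+          simplified_results = records.flatten
+
+          output_message.body = simplified_results
+        end
+      end
+    end
+  end
+end
+~~~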
+
+## Conclusion
+
+We have learned how to add a new command based only on a custom adapter and an existing command.
+In the process, we also learned how to receive and modify both incoming and outgoing messages.
+
+See also the [reference manual](../../../reference/plugin/adapter/) for more details.
+
+
+  [basic tutorial]: ../../basic/
+  [overview]: ../../../overview/
+  [search]: ../../../reference/commands/select/

  Added: tutorial/1.1.0/plugin-development/handler/index.md (+533 -0) 100644
===================================================================
--- /dev/null
+++ tutorial/1.1.0/plugin-development/handler/index.md    2014-11-30 23:20:40 +0900 (b511704)
@@ -0,0 +1,533 @@
+---
+title: "Plugin: Handle requests on all volumes, to add a new command working around the storage"
+layout: en
+---
+
+* TOC
+{:toc}
+
+## The goal of this tutorial
+
+This tutorial aims to help you learn how to develop plugins which do something on each volume in a distributed manner, around the handling phase.
+In other words, this tutorial describes *how to add a new simple command to the Droonga Engine*.
+
+## Precondition
+
+* You must complete the [tutorial for the adaption phase][adapter].
+
+## Handling of requests
+
+When a request is transferred from the adaption phase, the Droonga Engine enters the *processing phase*.
+
+In the processing phase, the Droonga Engine processes the request step by step.
+One *step* consists of several sub phases: the *planning phase*, the *distribution phase*, the *handling phase*, and the *collection phase*.
+
+ * At the *planning phase*, the Droonga Engine generates multiple sub steps to process the request.
+   In simple cases, you don't have to write code for this phase; then there is just one sub step to handle the request.
+ * At the *distribution phase*, the Droonga Engine distributes task messages for the request to multiple volumes.
+   (This is done completely by the Droonga Engine itself, so this phase is not pluggable.)
+ * At the *handling phase*, *each single volume simply processes one distributed task message as its input, and returns a result.*
+   This is when actual storage accesses happen.
+   Actually, some commands (`search`, `add`, `create_table` and so on) access the storage at this time.
+ * At the *collection phase*, the Droonga Engine collects results from volumes into one unified result.
+   There are some useful generic collectors, so you don't have to write code for this phase in most cases.
+
+After all steps are finished, the Droonga Engine transfers the result to the post adaption phase.
+
+A class that defines operations at the handling phase is called a *handler*.
+Put simply, adding a new handler means adding a new command.
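+
+Before going into details, here is a rough sketch of the shape of a handler plugin. The module layout mirrors the adapter plugins from the previous tutorial; the names `CountRecordsPlugin`, `count-records` and `countRecords` anticipate the command designed in the next section, and the step-definition part (`define_single_step` and the collector) is shown here only as an assumed outline, explained precisely later in this tutorial and in the plugin reference.
+
+~~~ruby
+require "droonga/plugin"
+
+module Droonga
+  module Plugins
+    # A sketch only: the real plugin is developed step by step below.
+    module CountRecordsPlugin
+      extend Plugin
+      register("count-records")
+
+      # Wire the command name to a handler class and a collector
+      # (assumed outline; see the plugin reference for the exact API).
+      define_single_step do |step|
+        step.name      = "countRecords"
+        step.handler   = :Handler
+        step.collector = Collectors::Sum
+      end
+
+      class Handler < Droonga::Handler
+        # Called once per volume, with that volume's task message as input;
+        # the returned value becomes this volume's partial result.
+        def handle(message)
+          [0] # placeholder: the real logic will count records in the table
+        end
+      end
+    end
+  end
+end
+~~~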
+
+## Design a read-only command `countRecords`
+
+Here, in this tutorial, we are going to add a new custom `countRecords` command.
+At first, let's design it.
+
+The command reports the number of records in a specified table, for each single volume.
+So it will help you to see how records are distributed across the cluster.
... truncated to 1.0MB



