heap memory - Elasticsearch keeps getting stopped and restarting again and again


I am running Elasticsearch on a 4 GB instance with a 2 GB heap. Elasticsearch runs fine, but after serving a few requests it gets stopped and then starts again on its own. The logs from one such cycle are shown below.
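For reference, the 2 GB heap follows the usual guidance of giving Elasticsearch about half of the machine's RAM. On a 2.x package install the heap is normally set via ES_HEAP_SIZE; a minimal sketch of the assumed setup (the actual file location depends on how Elasticsearch was installed):

    # /etc/default/elasticsearch (assumed Debian/Ubuntu package location)
    ES_HEAP_SIZE=2g               # sets both -Xms and -Xmx to 2 GB
    MAX_LOCKED_MEMORY=unlimited   # only relevant if bootstrap.mlockall is enabled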

[2016-07-11 11:11:04,492][INFO ][discovery                ] [Ev Teel Urizen] elasticsearch/9amxixxmttyv_l_s3t6gnw
[2016-07-11 11:11:07,568][INFO ][cluster.service          ] [Ev Teel Urizen] new_master {Ev Teel Urizen}{9amxixxmttyv_l_s3t6gnw}{p.p.p.p}{p.p.p.p:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-07-11 11:11:07,611][INFO ][http                     ] [Ev Teel Urizen] publish_address {p.p.p.p:9200}, bound_addresses {p.p.p.p:9200}
[2016-07-11 11:11:07,611][INFO ][node                     ] [Ev Teel Urizen] started
[2016-07-11 11:11:07,641][INFO ][gateway                  ] [Ev Teel Urizen] recovered [1] indices into cluster_state
[2016-07-11 11:11:07,912][INFO ][cluster.routing.allocation] [Ev Teel Urizen] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[user-details-index][0]] ...]).
[2016-07-11 11:13:01,482][INFO ][node                     ] [Ev Teel Urizen] stopping ...
[2016-07-11 11:13:01,503][INFO ][node                     ] [Ev Teel Urizen] stopped
[2016-07-11 11:13:01,503][INFO ][node                     ] [Ev Teel Urizen] closing ...
[2016-07-11 11:13:01,507][INFO ][node                     ] [Ev Teel Urizen] closed

where p.p.p.p is the private IP of the instance.
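Note that the node logs "stopping ..." before it dies, which is a graceful shutdown: something is sending the JVM a termination signal. A heap exhaustion inside the JVM would normally leave an OutOfMemoryError in the logs, and the kernel OOM killer uses SIGKILL, which would not produce these "stopping ..." lines at all. One way to narrow down what is stopping the node (a diagnostic sketch; assumes a systemd host and the stock service name):

    # Did the kernel OOM killer target the JVM?
    sudo dmesg | grep -iE 'killed process|out of memory'

    # Who stopped the service, and when? (assumes systemd and the default unit name)
    sudo journalctl -u elasticsearch --since '2016-07-11 11:00' | tail -n 50

    # Confirm the heap flags actually in effect on the running process
    ps aux | grep '[e]lasticsearch' | grep -oE -- '-Xm[sx][0-9]+[gmk]'

If dmesg is clean and the service log shows an explicit stop, the likely culprit is something external, e.g. a deploy script, watchdog, or monitoring agent recycling the service.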

Edit: logs after changing the log level to DEBUG:

[2016-07-11 12:46:34,129][DEBUG][index.shard              ] [Cameron Hodge] [user-details-index][0] recovery completed from [shard_store], took [106ms]
[2016-07-11 12:46:34,129][DEBUG][cluster.action.shard     ] [Cameron Hodge] [user-details-index][0] sending shard started for target shard [[user-details-index][0], node[tueznaaxqsqafq6idbvuvw], [P], v[149], s[INITIALIZING], a[id=j_dlcimrsoi7kff-9ah9xw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-07-11T12:46:33.774Z]]], indexUUID [tstidr3prwakikcihk5h0a], message [after recovery from store]
[2016-07-11 12:46:34,129][DEBUG][cluster.action.shard     ] [Cameron Hodge] received shard started for target shard [[user-details-index][0], node[tueznaaxqsqafq6idbvuvw], [P], v[149], s[INITIALIZING], a[id=j_dlcimrsoi7kff-9ah9xw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-07-11T12:46:33.774Z]]], indexUUID [tstidr3prwakikcihk5h0a], message [after recovery from store]
[2016-07-11 12:46:34,130][DEBUG][cluster.service          ] [Cameron Hodge] processing [shard-started ([user-details-index][0], node[tueznaaxqsqafq6idbvuvw], [P], v[149], s[INITIALIZING], a[id=j_dlcimrsoi7kff-9ah9xw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-07-11T12:46:33.774Z]]), reason [after recovery from store]]: execute
[2016-07-11 12:46:34,131][INFO ][cluster.routing.allocation] [Cameron Hodge] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[user-details-index][0]] ...]).
[2016-07-11 12:46:34,131][DEBUG][cluster.service          ] [Cameron Hodge] cluster state updated, version [4], source [shard-started ([user-details-index][0], node[tueznaaxqsqafq6idbvuvw], [P], v[149], s[INITIALIZING], a[id=j_dlcimrsoi7kff-9ah9xw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-07-11T12:46:33.774Z]]), reason [after recovery from store]]
[2016-07-11 12:46:34,131][DEBUG][cluster.service          ] [Cameron Hodge] publishing cluster state version [4]
[2016-07-11 12:46:34,135][DEBUG][cluster.service          ] [Cameron Hodge] set local cluster state to version 4
[2016-07-11 12:46:34,135][DEBUG][index.shard              ] [Cameron Hodge] [user-details-index][0] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]
[2016-07-11 12:46:34,153][DEBUG][cluster.service          ] [Cameron Hodge] processing [shard-started ([user-details-index][0], node[tueznaaxqsqafq6idbvuvw], [P], v[149], s[INITIALIZING], a[id=j_dlcimrsoi7kff-9ah9xw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-07-11T12:46:33.774Z]]), reason [after recovery from store]]: took 23ms done applying updated cluster_state (version: 4, uuid: g_foveanrfoff0lhjz0h2a)
[2016-07-11 12:47:00,583][DEBUG][indices.memory           ] [Cameron Hodge] recalculating shard indexing buffer, total is [203.1mb] with [1] active shards, each shard set to indexing=[203.1mb], translog=[64kb]
[2016-07-11 12:47:01,648][INFO ][node                     ] [Cameron Hodge] stopping ...
[2016-07-11 12:47:01,660][DEBUG][indices                  ] [Cameron Hodge] [user-details-index] closing ... (reason [shutdown])
[2016-07-11 12:47:01,661][DEBUG][indices                  ] [Cameron Hodge] [user-details-index] closing index service (reason [shutdown])
[2016-07-11 12:47:01,662][DEBUG][index                    ] [Cameron Hodge] [user-details-index] [0] closing... (reason: [shutdown])
[2016-07-11 12:47:01,664][DEBUG][index.shard              ] [Cameron Hodge] [user-details-index][0] state: [STARTED]->[CLOSED], reason [shutdown]
[2016-07-11 12:47:01,664][DEBUG][index.shard              ] [Cameron Hodge] [user-details-index][0] operations counter reached 0, will not accept further writes
[2016-07-11 12:47:01,664][DEBUG][index.engine             ] [Cameron Hodge] [user-details-index][0] flushing shard on close - this might take some time to sync files to disk
[2016-07-11 12:47:01,666][DEBUG][index.engine             ] [Cameron Hodge] [user-details-index][0] close now acquiring writeLock
[2016-07-11 12:47:01,666][DEBUG][index.engine             ] [Cameron Hodge] [user-details-index][0] close acquired writeLock
[2016-07-11 12:47:01,668][DEBUG][index.translog           ] [Cameron Hodge] [user-details-index][0] translog closed
[2016-07-11 12:47:01,672][DEBUG][index.engine             ] [Cameron Hodge] [user-details-index][0] engine closed [api]
[2016-07-11 12:47:01,672][DEBUG][index.store              ] [Cameron Hodge] [user-details-index][0] store reference count on close: 0
[2016-07-11 12:47:01,672][DEBUG][index                    ] [Cameron Hodge] [user-details-index] [0] closed (reason: [shutdown])
[2016-07-11 12:47:01,672][DEBUG][indices                  ] [Cameron Hodge] [user-details-index] closing index cache (reason [shutdown])
[2016-07-11 12:47:01,672][DEBUG][index.cache.query.index  ] [Cameron Hodge] [user-details-index] full cache clear, reason [close]
[2016-07-11 12:47:01,673][DEBUG][index.cache.bitset       ] [Cameron Hodge] [user-details-index] clearing bitsets because [close]
[2016-07-11 12:47:01,673][DEBUG][indices                  ] [Cameron Hodge] [user-details-index] clearing index field data (reason [shutdown])
[2016-07-11 12:47:01,674][DEBUG][indices                  ] [Cameron Hodge] [user-details-index] closing analysis service (reason [shutdown])
[2016-07-11 12:47:01,674][DEBUG][indices                  ] [Cameron Hodge] [user-details-index] closing mapper service (reason [shutdown])
[2016-07-11 12:47:01,674][DEBUG][indices                  ] [Cameron Hodge] [user-details-index] closing index query parser service (reason [shutdown])
[2016-07-11 12:47:01,680][DEBUG][indices                  ] [Cameron Hodge] [user-details-index] closing index service (reason [shutdown])
[2016-07-11 12:47:01,680][DEBUG][indices                  ] [Cameron Hodge] [user-details-index] closed... (reason [shutdown])
[2016-07-11 12:47:01,680][INFO ][node                     ] [Cameron Hodge] stopped
[2016-07-11 12:47:01,680][INFO ][node                     ] [Cameron Hodge] closing ...
[2016-07-11 12:47:01,685][INFO ][node                     ] [Cameron Hodge] closed
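The DEBUG trace shows the same picture: an orderly close of the shard, translog, and engine rather than a crash, right after the node receives "stopping ...". To rule out heap pressure anyway, heap usage can be sampled while the node is up via the node stats API (a sketch; assumes the default HTTP port 9200):

    # Sample JVM heap usage on the local node (ES 2.x node stats API)
    curl -s 'http://localhost:9200/_nodes/stats/jvm?pretty' | grep -E 'heap_used_percent|heap_max_in_bytes'

A heap_used_percent climbing toward 100 would point at memory pressure; a flat heap followed by the "stopping ..." line points back at an external signal.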

