DevOps tooling



Graphite and Grafana in Docker
$ docker-compose up
$ docker-compose down

Graphite + Carbon-cache

  • 8888 the graphite web interface admin/admin
  • 2003 the carbon-cache line receiver (the standard graphite protocol)
  • 2004 the carbon-cache pickle receiver
  • 7002 the carbon-cache query port (used by the web interface)


Grafana

  • 3000 the Grafana web interface admin/admin
  • add new data sources

Send some fake stats to Graphite

while true; do
  echo "local.random.diceroll $(((RANDOM % 10) + 1)) $(date +%s)" | nc -c localhost 2003
done
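Each datapoint in Graphite's plaintext protocol is three space-separated fields: metric path, value, and a Unix epoch timestamp. A minimal sketch composing one such line without sending it:

```shell
# Compose one Graphite plaintext-protocol datapoint: <path> <value> <epoch>
metric="local.random.diceroll"
value=$(( (RANDOM % 10) + 1 ))   # simulated roll, 1..10 as in the loop above
ts=$(date +%s)                   # Unix epoch seconds
line="$metric $value $ts"
echo "$line"
```

Piping `$line` to `nc -c localhost 2003` delivers it to the carbon-cache line receiver listed above.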

Create new dashboard with graph from new datasource

  • local.random.diceroll

Elasticsearch v5 + Kibana


Repository structure overview

  • set of README files:
    • README – this document
    • README_api_usage – basic handling of ELK via curl
    • README_mapping – ELK mapping example
    • README_query_cheatsheet – examples of query language
    • README_watcher – how to setup simple alerting
    • README_workshop – labs + basic terms
    • README_geoip – info about GeoIP coordinates handling
    • README_s3 – experimental backup to S3 bucket simulated via Riak CS
  • Terraform – just for the workshop; spins up several servers with preinstalled Docker, README in the directory

ELK Stack components


Fluentd

  • 1x Fluentd – NEEDS TO BE RUN FIRST, see how to run this stack
    • containers output logger (back to ELK)
    • index platform*


Elasticsearch

  • 3x server (data/client/master role) – you can start just one server (elasticsearchdataone) if you don't have the HW resources, or limit resources via Docker CPU/memory quotas; see comments in common-services.yml
  • x-pack installed
  • exposed ports:
    • 920[1-3] / 930[1-3]
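With x-pack installed, HTTP requests need credentials. A quick cluster health check against the first node might look like this; elastic/changeme is the x-pack v5 default and is an assumption here, adjust if your stack changed it:

```shell
# Cluster health via the first node's HTTP port (9201 per the list above).
node="http://localhost:9201"
echo "checking $node/_cat/health"
# Requires the stack to be running:
# curl -s -u elastic:changeme "$node/_cat/health?v"
```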



Logstash

  • used for easy sample data upload
  • exposed ports:
    • 5000 – json filter
    • 5001 – raw, no filters
  • you can use the .raw field for not_analyzed data
  • index: logstash*
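A one-off event can be fed straight into the json-filter listener with netcat; the field names below are illustrative, not required by the filter:

```shell
# One sample JSON event for the Logstash json filter on port 5000.
event='{"message":"hello from shell","level":"info"}'
echo "$event"
# Send it once the stack is up (port 5001 would take the line raw instead):
# echo "$event" | nc localhost 5000
```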

Riak CS

  • used for AWS S3 simulation
  • exposed ports:
    • 8080 – API
  • not logged via Fluentd to ELK (API key created during start)
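Since Riak CS speaks the S3 API, standard S3 tooling works by pointing it at the local endpoint. The aws CLI call below is a sketch; credentials would come from the API key created at start:

```shell
# S3-compatible endpoint exposed by Riak CS (port 8080 per the list above).
endpoint="http://localhost:8080"
echo "S3 endpoint: $endpoint"
# With the stack up and credentials configured:
# aws --endpoint-url "$endpoint" s3 ls
```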


Mattermost

  • Mattermost server – running outside the demo stack as a simple container with open access and a webhook created
  • the server's IP is set in .env

Stack handling

Start stack

  • Start stack (do not use docker-compose up directly, as there are some prerequisites to starting the stack)
  • This short script prepares temporary data volumes for the ES servers and starts the fluentd container first
  • Download git repo:
$ git clone 
$ cd elk-stack-v5-xpack
$ ./_start
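The real logic lives in _start in the repo; roughly, a script of this shape does two things, prepare data volumes and bring fluentd up before the rest. Paths and service names below are illustrative, not taken from the script:

```shell
# Sketch of what a _start script like this typically does.
mkdir -p /tmp/es-data-1 /tmp/es-data-2 /tmp/es-data-3   # temporary ES data volumes
echo "volumes ready"
# Fluentd must be up before the other containers log through it:
# docker-compose up -d fluentd
# docker-compose up -d
```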

Stop stack

  • just stops the containers; to remove the network/artifacts use the docker-compose command
$ ./_stop

Used tools

You can use the logstash or kibana containers (everything is mounted under /code/) or install the tools on your system (macOS/Linux).

  • netcat – for feeding logstash
  • jq – for pretty outputs
  • curl – for shell work with Elastic
  • curator
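A quick sanity check that the helper tools above are on PATH before working inside or outside the containers:

```shell
# Report which of the helper tools are available locally.
for tool in nc jq curl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```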

Not covered

