Ubuntu Logstash Server with Kibana3 Front End Autoinstall

I have been using Graylog2 and VMware Log Insight for some time now and finally wanted to try out Logstash. The first thing I wanted to do was create an automated script to handle most of the install and configuration and get everything running. I figured that as I go through this I would share it with everyone and keep building on the script based on feedback. I created a Graylog2 script (located here) that has proven to be of great help to the community, and I figured I might be able to do the same for the Logstash community; even if it didn't work out, I would learn a great deal about Logstash in the meantime. There is a great community around Logstash, so getting support should be very easy. Also, I am just starting to learn Logstash now, so this should be a lot of fun, which means there will be a good amount of change around this post.

First off, I will be keeping this script updated and available on GitHub located here. This will be the only location where I keep it up to date.

I would recommend using a clean install of Ubuntu 12.04 or 14.04 to install onto. However, if you decide to install on an existing server, I am not responsible for anything that may get broken. 🙂

So here is how we get started and get everything up and running. Open a terminal session on the server you will be installing to and run the following commands.

For Logstash 1.3.x version: (OUTDATED!!)

sudo apt-get update
sudo apt-get -y install git
cd ~
git clone https://github.com/mrlesmithjr/Logstash_Kibana3
chmod +x ./Logstash_Kibana3/install_logstash_kibana_ubuntu.sh
sudo ./Logstash_Kibana3/install_logstash_kibana_ubuntu.sh

For Logstash 1.4.x version: (CURRENT)

sudo apt-get update
sudo apt-get -y install git
cd ~
git clone https://github.com/mrlesmithjr/Logstash_Kibana3
chmod +x ./Logstash_Kibana3/install_logstash_1.4_kibana_ubuntu.sh
sudo ./Logstash_Kibana3/install_logstash_1.4_kibana_ubuntu.sh
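
Once the script finishes (it will prompt you for a few values along the way, more on that next), you can sanity-check the services with something like the following (assuming the default ports the script uses):

curl -s http://localhost:9200
sudo service elasticsearch status
sudo service logstash status

Elasticsearch should answer on 9200 with a small JSON blob, and both services should report as running.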

You will be prompted during the script to enter your domain name, vSphere host naming convention, and PFSense firewall hostname. These are used to configure Logstash filtering for your ESXi hosts and PFSense firewall. If you do not monitor any vSphere hosts or use PFSense, just enter some random info; these answers are only passed into a filtering rule for Logstash.
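
To give you an idea of what these answers turn into, below is a sketch of the kind of hostname-matching filter the installer drops into /etc/logstash/logstash.conf (the hostnames and domain here are placeholders; the real rule is built from whatever you enter):

filter {
if [syslog_hostname] =~ /.*?(esxihost).*?(yourdomain.local)?/ {
mutate {
add_tag => [ "VMware" ]
}
}
if [syslog_hostname] =~ /.*?(pfsense).*?(yourdomain.local)?/ {
mutate {
add_tag => [ "PFSense" ]
}
}
}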

Once complete, open your browser of choice and connect to http://logstashservername/kibana or http://ipaddress/kibana.

You will see the following screen once connected. Seeing as we are setting up Logstash with Kibana, go ahead and select the link on the left.

[Screenshot: Kibana3 welcome page]

Now here is a screenshot of some actual ESXi logging. Notice the VMware tag; it is added by the filtering rule the installer created, based on the naming convention we passed to it.

[Screenshot: Logstash VMware dashboard]

You can grab my VMware dashboard from here.

Here is another screenshot of logging graphs built by adding different search criteria.

[Screenshot: logging graphs]

So what we have done with this script is install Apache2, Nginx, Elasticsearch, Logstash and Kibana3. Logstash has been configured to listen on TCP/514 (syslog; recommended for devices that support TCP), UDP/514 (PFSense and other syslog devices that cannot send via TCP), TCP/1514 (VMware ESXi), TCP/1515 (VMware vCenter), TCP/3515 (Windows event logs) and TCP/3525 (Windows IIS logging).
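
For reference, the input section the script generates looks roughly like this sketch (the type names here are illustrative; check the generated /etc/logstash/logstash.conf for the authoritative version):

input {
tcp { type => "syslog" port => "514" }
udp { type => "syslog" port => "514" }
tcp { type => "VMware" port => "1514" }
tcp { type => "vCenter" port => "1515" }
tcp { type => "eventlog" port => "3515" }
tcp { type => "iis" port => "3525" }
}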

Now set up your network devices to start sending their syslogs to your Logstash server, and if your device supports sending via TCP, use it. Reference the port list below when setting up some of the devices that are pre-configured during the setup; a quick smoke test follows the list.

Port List
TCP/514 Syslog (Devices supporting TCP)
UDP/514 Syslog (Devices that do not support TCP)
TCP/1514 VMware ESXi
TCP/1515 VMware vCenter (Windows install or appliance; for a Windows install use NXLog from the device setup below, for the appliance reference the device setup below)
TCP/3515 Windows Eventlog (Use NXLog from below in device setup)
TCP/3525 Windows IIS Logs (Use NXLog from below in device setup)
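
If you want a quick smoke test of the listeners from any Linux box, bash can fake a syslog line well enough (replace logstashserver with your server's name or IP; this is only a connectivity check, not a real syslog client):

echo "<13>$(date '+%b %d %H:%M:%S') testhost test: hello logstash" > /dev/tcp/logstashserver/514
echo "<13>$(date '+%b %d %H:%M:%S') testhost test: hello logstash" > /dev/udp/logstashserver/514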

Below is a decent /etc/logstash/logstash.conf file that I am using and will be updating periodically. Some of these settings are included in the install script, but not all of them. You will need to change the naming for ESXi and PFSense to match your environment (or just use the auto-install script).
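
Until you grab the full file, here is a minimal sketch of the plumbing it hangs off of (the grok pattern is a stand-in; the real file carries far more filters, including the ESXi/PFSense tagging shown earlier):

filter {
if [type] == "syslog" {
grok {
match => [ "message", "%{SYSLOGLINE}" ]
}
}
}

output {
elasticsearch_http {
host => "localhost"
}
}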

For Windows event logs I highly recommend using NXLog for Windows. I am including a functional nxlog.conf file for you to use as well with the above logstash.conf configuration.
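
Before touching NXLog, you can confirm the two Windows listeners are actually up on the Logstash server:

sudo netstat -ntlp | grep -E '3515|3525'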

Here is a screenshot of the Windows logging. If you want to use the dashboard view for Windows, grab it from here.

[Screenshot: Logstash Windows dashboard]

(OLD)

If you want to purge and expire old logs, have a look here. Jordan Sissel (the creator of Logstash) has provided a Python script to do this. (It has since been renamed to curator, as noted in the comments.)

Here is how you set up the script. Open a terminal on your Logstash server and execute the following.

cd ~
sudo apt-get install python-pip
sudo apt-get install git
git clone https://github.com/logstash/expire-logs
cd expire-logs
sudo pip install -r requirements.txt

Now that you have this set up, read the examples on the GitHub page for different scenarios.
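
For example (hedged, from memory of the repo, so double-check the script name and flags in its README, especially since the project has since been renamed to curator), keeping 90 days of indexes might look like:

cd ~/expire-logs
./logstash_index_cleaner.py --host localhost -d 90

You can then drop the same command into cron if you want it to run nightly.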

After you purge your logs using the above method you will need to restart elasticsearch.

sudo service elasticsearch restart

That should be it.

Enjoy!

All comments and feedback are very much welcomed and encouraged.

Interested in a highly available setup? Go here and check out the Highly Available ELK (Elasticsearch, Logstash and Kibana) setup.

123 thoughts on "Ubuntu Logstash Server with Kibana3 Front End Autoinstall"

    • @n00blet. Once you start sending data to it, that error will go away. It is because there is not any syslog data in Elasticsearch yet. 🙂

  1. Having little Linux under my belt, but interested in "learning to fish" rather than buying a fish sandwich at McDonald's, I'm interested in digging a little deeper. This post is of great interest but is written for sysadmins with some miles under their belt. I have the new Ubuntu VM patched up and ready, I have run your slick script, and Logstash is installed. How do I:

    1) Configure Apache (or is it already configured?)

    2) "Start sending data to it" so the index error goes away?

    Also, do I need to provide access via SSH or anything to my ESXi hosts for Logstash to be able to pull log info?

    Thanks a ton!

  2. This looks great man, thank you very much for your contribution. I am actually trying out your Graylog2 install script as I write this; I'll give this one a try next. This will potentially save me quite a bit of time…

  3. Hi, well the CentOS script didn't work; I tried it with a fully updated copy of CentOS 6.5. From what I can see it fails to install git, which causes a problem with the ES install, and further down there seems to be a problem with the version of Passenger; it's trying to get a version which is higher and seemingly works differently. You may be able to solve this by specifying the version of Passenger via the gem install scripts. I set up a second machine with Ubuntu 12.10 Server and it worked perfectly. The only problem that I can see is that it uses the older version (0.12) of GL2. I am hoping to adapt this to work with the latest release of 0.2.

    • @cultavix Very cool. The CentOS script was actually being maintained by another person that originally got it working so if you want to you can post an issue against the CentOS script on Github and we can get it resolved. The Ubuntu/Debian scripts I maintain on my own. There is a preview script that works for 0.2 and it should be working just fine.

      • (Reading database … 61859 files and directories currently installed.)
        Unpacking elasticsearch (from elasticsearch-0.90.7.deb) …
        dpkg-deb (subprocess): short read on buffer copy for failed to write to pipe in copy
        dpkg-deb: error: subprocess paste returned error exit status 2
        dpkg: error processing elasticsearch-0.90.7.deb (--install):
        short read on buffer copy for backend dpkg-deb during './usr/share/elasticsearch/lib/lucene-core-4.5.1.jar'
        Errors were encountered while processing:
        elasticsearch-0.90.7.deb

        - getting this error, kindly help

  4. I wonder:

    output {

    gelf {}

    }

    line 39 of /etc/init.d/logstash

    status_of_proc -p $pid_file "" "$name"

    should be

    status_of_proc -p "$pid_file" "$name"

    nice script 🙂

  5. Great package install! Reviving my Linux skills. This definitely saved me a lot of time. I see on your blog about the index issue; there is none due to no data. Do you have a sample file I can implement to test with? Any info on getting it going is also appreciated. Thank you and keep up the great blog!

  6. Way easier to use than the new graylog2 .20 and installed much faster! Thanks!

    BTW, you have a small typo in your expire directions: phython-pip instead of python-pip.

    Also, I don't really see any information about requirements.txt. That file doesn't exist by default.

    • @MACscr Yeah it really is 🙂 Thanks for catching the typo too! I just updated it to the latest as well. Been spending a lot of time on Graylog2 and neglected to get back to logstash so thanks! And it looks like they have renamed expire-logs to curator so I need to update that too now.

  7. Hi mrlesmithjr, it works great.

    Can you add GeoIP to this script?

    Also, why is nothing changed in elasticsearch.yml, and why does everything still have a # comment?

  8. hi mrlesmithjr

    when I run the pip command I see this error

    Could not open requirements file: [Error 2] No such file or directory: 'requirements.txt'

    • @morteza I need to look into the logstash installer and get it updated to the latest version once I get a chance to do it.

  9. This is just great! Got it up and running in no time, and I have pretty limited Linux skills! Got no budget for VMware Log Insight, but this might be an even better alternative! Big thanks!

    • @LB Glad it worked for you. I need to get around to getting this script updated for the latest version of Logstash but they changed the install method and I have not had the time to get to it yet. So stay tuned and Enjoy!

  10. Thanks for the install, it works great. Just wondering what I need to configure to add additional logs (not streams) for it to analyze?

  11. I added the following to my logstash.conf but don't see the logs show up in Kibana. What am I missing?

    input {
    file {
    path => "/var/log/apache2/access.log"
    type => "apache-access" # a type to identify those logs (will need this later)
    start_position => "beginning"
    }
    }

    filter {
    if [type] == "apache-access" { # this is where we use the type from the input section
    grok {
    match => [ "message", "%{COMBINEDAPACHELOG}" ]
    }
    }
    }

  12. I also had the following output command:

    output {
    elasticsearch_http {
    host => "localhost"
    flush_size => 1
    manage_template => true
    template => "/opt/logstash/lib/logstash/outputs/elasticsearch/elasticsearch-template.json"
    }
    }

    • @LinuxMan here is what I use
      input {
      file {
      path => "/var/log/apache2/*access.log"
      type => "apache"
      sincedb_path => "/var/log/.sincedb"
      }
      }

      filter {
      if [type] == "apache" {
      grok {
      pattern => "%{COMBINEDAPACHELOG}"
      }
      }
      }

  13. Thanks for the reply, I changed my logstash.conf and used the input and filter you posted, but I still don't see the logs in my kibana dashboard. Could it be that my output is wrong?

    output {
    elasticsearch_http {
    host => "localhost"
    flush_size => 1
    manage_template => true
    template => "/opt/logstash/lib/logstash/outputs/elasticsearch/elasticsearch-template.json"
    }
    }

  14. Hi I tried in on a fairly clean 14.04 installation,

    The script seemed to go OK, it was a little fast at times 😉

    I got as far as the bit where you say:

    "Once complete open your browser of choice and connect to http://logstashservername/kibana or http://ipaddress/kibana."

    so I tried it and got an error message "Upgrade Required Your version of Elasticsearch is too old. Kibana requires Elasticsearch 0.90.9 or above."

    I did a dpkg -l elastic* and I have elasticsearch 1.1.1; I'm guessing that is the problem. Any suggestions?

    Great work by the way and thanks for your efforts so far …

    Adrian

    • @adrian elasticsearch 1.1.1 is fine. I saw this same error a while ago and restarted elasticsearch. Then everything was fine. Have you tried that?

      • No, after a reboot I still get the same error message.

        a netstat -tulpn shows java listening on 9200 (tcp6)

        It was 3:30 am when I posted the original message, but the problem seems to have not gone away after a sleep (unfortunately).
        I get two alerts:
        “Upgrade Required Your version of Elasticsearch is too old. Kibana requires Elasticsearch 0.90.9 or above.”
        And
        ” Error Could not reach http://server3.koolapps.com:9200/_nodes. If you are using a proxy, ensure it is configured correctly”

  15. Went back to basics (did a restore to base system (almost) on my DigitalOcean droplet).

    Tried again, and this time all was fine.

    No idea what was wrong before, but all ok now.

    So to continue with your instructions …

    Thanks for your time … 😉

      • @mrlesmithjr, thank you for your earlier response. I played with it and fixed that problem.

        I am very new to logstash, so my question may be foolish:

        Do I need to create a separate '.conf' file for every other product, or can I add a filter to the existing '.conf' file to capture logs?

        Subas

  16. I'm having issues with the IIS logs not getting parsed properly, from either IIS7 or IIS7.5.

    Below is what my raw output looks like, sans identifying information. Any ideas on what I should change?

    {
    "_index": "logstash-2014.06.16",
    "_type": "iis",
    "_id": "RSwjlT4rQTOQTHjfLL4DpQ",
    "_score": null,
    "_source": {
    "message": "Jun 16 16:07:06 ServerName 2014-06-16 21:06:09 1.1.1.1 GET /SFG_BEL_8002747581_Tufts_Health/index.htm agentMediaLegId=us-cs-telephony-voice-7005.iad7-210605UTC-20140616-feca8aa6-dc01-42dd-9523-d1241fc11af0 443 - 3.3.3.3 Mozilla/5.0+(Windows+NT+6.1)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/35.0.1916.153+Safari/537.36 200 0 0 78\r\nJun 16 16:07:06 ServerName 2014-06-16 21:06:39 1.1.1.1 GET /Reps/BEL+-+Belvoir/N+E+W++TITLES/MJ+-+Mary+Jane's+Farm/MJ+Cheat+Sheet.docx - 443 - 2.2.2.2 Mozilla/5.0+(Windows+NT+6.1)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/35.0.1916.153+Safari/537.36 200 0 0 140\r\nJun 16 16:07:06 ServerName 2014-06-16 21:07:05 1.1.1.1 GET /SFG_DRG_8005826643_Option_1_Annies_Cust_Serv/index.htm agentMediaLegId=us-cs-telephony-voice-25010.iad12-210704UTC-20140616-7a20e00f-f2d1-4649-9a1f-58d16e1b7028 443 - 10.0.2.184 Mozilla/5.0+(Windows+NT+6.1)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/35.0.1916.153+Safari/537.36 200 0 0 78\r\n",
    "@version": "1",
    "@timestamp": "2014-06-16T21:06:59.609Z",
    "host": "1.1.1.1:62612",
    "type": "iis",
    "tags": [ "IISLogs", "_grokparsefailure" ],
    "@source_host": "%{servername}",
    "@message": "(identical to the raw message above)",
    "geoip": {}
    },
    "sort": [ 1402952819609, 1402952819609 ]
    }

      • Yes, I'm using the nxlog config you've got posted and just uncommented the portion pertaining to the IIS logs.

        • Is your hostname REALLY servername as it shows in the log? If it is, I wonder if that is causing the log parsing to not work correctly.

          • No, I sanitized the output to remove any internal information that could be used maliciously.

          • Ok makes sense. I have a new filter that works on your provided raw message.

          • Can you reinstall logstash? Or do you just want a replacement logstash.conf to test out?

          • I added an additional grok pattern to not look for syslog priority in IIS logs.

          • looks like the new filter broke IIS logging for me. I will figure it out and let you know.

          • I'm not getting anything to show up now since the dashboard and filter change, unfortunately.

          • I just figured out the issue with the dashboard; however, I'm still having the parsing issues with the IIS logs despite the updated config file.

          • I really appreciate the assistance with this.

            ## Please set the ROOT to the folder your nxlog was installed into,
            ## otherwise it will not start.
            #define ROOT C:\Program Files\nxlog
            define ROOT C:\Program Files (x86)\nxlog

            Moduledir %ROOT%\modules
            CacheDir %ROOT%\data
            Pidfile %ROOT%\data\nxlog.pid
            SpoolDir %ROOT%\data
            LogFile %ROOT%\data\nxlog.log

            <Extension json>
            Module xm_json
            </Extension>

            <Extension syslog>
            Module xm_syslog
            </Extension>

            <Processor transformer>
            Module pm_transformer
            OutputFormat syslog_rfc3164
            </Processor>

            # Nxlog internal logs
            <Input internal>
            Module im_internal
            Exec $EventReceivedTime = integer($EventReceivedTime) / 1000000; to_json();
            </Input>

            # Windows Event Log
            <Input eventlog>
            Module im_msvistalog
            Query <QueryList>\
            <Query Id="0">\
            <Select Path="Security">*[System[(EventID=4624 or EventID=4776 or EventID=4634 or EventID=4672 or EventID=4688)]]</Select>\
            <Select Path="System">*[System[(EventID=1074 or (EventID >= 6005 and EventID <= 6009) or EventID=6013)]]</Select>\
            </Query>\
            </QueryList>
            Exec $EventReceivedTime = integer($EventReceivedTime) / 1000000; to_json();
            </Input>

            # IIS logs
            <Input IIS_In>
            Module im_file
            File "C:\inetpub\logs\LogFiles\W3SVC2\u_ex*"
            Exec $Message = $raw_event;
            SavePos TRUE
            Recursive TRUE
            </Input>

            <Output eventlog_out>
            Module om_tcp
            Host logstash
            Port 3515
            </Output>

            <Output IIS_Out>
            Module om_tcp
            Host logstash
            Port 3525
            </Output>

            <Route 1>
            Path internal, eventlog => eventlog_out
            </Route>

            <Route 2>
            Path IIS_In => transformer => IIS_Out
            </Route>

          • here is what my raw log looks like in IIS. Can you post the top few lines of yours like mine below?

            #Software: Microsoft Internet Information Services 7.5
            #Version: 1.0
            #Date: 2014-06-17 13:34:32
            #Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status time-taken
            2014-06-17 13:34:32 10.0.101.146 GET / - 80 - 10.0.0.139 Mozilla/5.0+(Windows+NT+6.3;+WOW64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/35.0.1916.153+Safari/537.36 302 0 0 149356
            2014-06-17 13:34:32 10.0.101.146 GET / - 80 - 10.0.0.139 Mozilla/5.0+(Windows+NT+6.3;+WOW64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/35.0.1916.153+Safari/537.36 302 0 0 141780

          • Try reloading now and see if things work better. I changed the nxlog.conf setup as well as the IIS parsing.

          • I have the IIS Logging with NXLog and Logstash IIS Parsing working correctly now. Including breaking up multiple event lines instead of mangling many into one event. Reload logstash using the install script and you should be good to go. Let me know if you still see something not quite right.

          • Yup that seemed to fix it perfectly, kudos and thanks for the help.

            I'm going to play around and see if I can get the agents to display properly.

          • @PlagueFox – Awesome…..No problem at all because your issue uncovered other things as well. Let me know what you find out and share your dashboard once you get it nailed down. Would like to check it out.

    • Here is one of my examples that works, but note the <13> priority code. Is there a <XX> priority code at all in yours? Any special logging changed within IIS?
      <13>Jun 17 15:05:08 SOLARWINDS1 2014-06-17 19:05:07 10.0.101.146 GET /Orion/StatusIcon.ashx size=small&entity=Orion.Groups&id=10&status=3 80 EVERYTHING\administrator 10.0.0.139 Mozilla/5.0+(Windows+NT+6.3;+WOW64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/35.0.1916.153+Safari/537.36 200 0 0 17
      <13>Jun 17 15:05:08 SOLARWINDS1 2014-06-17 19:05:07 10.0.101.146 GET /Orion/StatusIcon.ashx size=small&entity=Orion.Groups&id=11&status=3 80 EVERYTHING\administrator 10.0.0.139 Mozilla/5.0+(Windows+NT+6.3;+WOW64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/35.0.1916.153+Safari/537.36 200 0 0 59

  17. Have you checked out Packetbeat? It seems like a very useful tool and it also uses Kibana and Elasticsearch. The author told me it should be able to use the same Elasticsearch and Kibana install that Logstash uses too, though I haven't tried them combined yet.

    • @Mark – A great find. I have never tried it, but I am actually installing it now on all of my Linux servers to see what it can do. Running it on the same ELK stack as logstash too, so we will see what happens.

        • @Mark – So far so good 🙂 However, it did not work initially until they fixed a bug in their app so that it does not only look for ES running on localhost when installing the packetbeat agent. But that has been resolved and it works great now!

  18. Brilliant script, thanks for your effort :).

    However, it appears as if my Ubuntu server refuses to accept incoming data; pfSense receives the following response from my ELK server (192.168.0.2):

    IP 192.168.0.2 > "PF-Sense IP": ICMP 192.168.0.1 udp port 514 unreachable, length 452

    Could you provide a basic overview of the actual components that should be installed by the Logstash 1.4 version of the script?

    Maybe I understood the Logstash documentation incorrectly, but I was under the impression that Logstash should be able to receive syslog input directly through a tcp syslog input in logstash.conf.

    When checking with netstat I don't see anything LISTENING on port 514.

    I guess my question is whether I need to install a separate syslog collector, route the syslog through redis, or check the firewall on the ELK server?

    • @MKO – Looking back at the script it appears that I removed the original UDP/514 listener from the logstash.conf. I have updated the script but you can add the following to your /etc/logstash/logstash.conf underneath the input tcp syslog section and then restart the logstash service. Hope this helps.
      input {
      udp {
      type => "syslog"
      port => "514"
      }
      }

      restart logstash
      sudo service logstash restart

      • Thanks, I already tried playing around with the udp syslog inputs, but it did not resolve the issue. I can see the packets arriving at the server through Wireshark, but the data is not accepted or processed; I need to look into it some more tomorrow.

        • Is the section parsing the pfsense name correct? Reverse DNS working, etc.? If the below section is not correct it will not work.
          if [syslog_hostname] =~ /.*?(nsvpx).*?(everythingshouldbevirtual.local)?/ {
          mutate {
          add_tag => [ "Netscaler" ]
          }
          }
          if [syslog_hostname] =~ /.*?(pfsense).*?(everythingshouldbevirtual.local)?/ {
          mutate {
          add_tag => [ "PFSense" ]
          }
          }

          • Your modified script does listen for incoming syslog traffic, and iptraf shows that I am indeed receiving syslog traffic. When stopping the logstash service, iptraf displays the port unavailable messages which are returned to the pfsense router.

            Running netstat -a displays the following line:

            udp6 0 0 [::]:syslog [::]:*

            apparently it is only binding to udp6. I read somewhere that the OS should be able to map this to udp internally, but I am wondering if the OS does this by default.

            Should I look into ways to have java prefer ipv4 bindings? At least I think that is where a potential problem could be.

            The second potential problem could be the parsing of the received logs, as you have already indicated. Name resolution is working flawlessly, and at first glance the hostnames in the script are correct.

          • You can address this in two different ways. I will probably change the script to use rsyslog to capture UDP/514 and redirect back to logstash on TCP/514; this is how I do it on large-scale installs. The standalone install will work the same. You could also change the java settings for the logstash service, but rsyslog would be easier. The java setting is -Djava.net.preferIPv4Stack=true in case you want to try that. To use rsyslog instead, run the following after commenting out the input for UDP/514 in /etc/logstash/logstash.conf and restarting logstash (sudo service logstash restart).

            sudo bash
            sed -i -e 's|#$ModLoad imudp|$ModLoad imudp|' /etc/rsyslog.conf
            sed -i -e 's|#$UDPServerRun 514|$UDPServerRun 514|' /etc/rsyslog.conf
            echo '*.* @@localhost' | tee -a /etc/rsyslog.d/50-default.conf
            service rsyslog restart

  19. Just to say, I've been using this for a few months now with no problem, when suddenly I lost access to the DigitalOcean droplet running as my Kibana/Logstash server.

    It seems there is a bad default value in the Elasticsearch config files which allows remote code execution; my system had been compromised and DigitalOcean disabled the outbound network port.

    They were very helpful and I now have everything back up and running (good thing, these backups).

    They pointed me to this webpage, which lists the problem and how to fix it …
    http://bouk.co/blog/elasticsearch-rce/

    • @Adrian – Thanks for letting me know. I have implemented the fix. I read about this setting a while ago but obviously did not set it. But the script now has it added.
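
      For anyone applying the fix by hand rather than re-running the script, the mitigation that link describes is disabling dynamic scripting in Elasticsearch 1.x (a sketch; adjust the path if your elasticsearch.yml lives elsewhere):

      echo 'script.disable_dynamic: true' | sudo tee -a /etc/elasticsearch/elasticsearch.yml
      sudo service elasticsearch restart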

  20. It says the installation has completed and I can browse http://ipaddress/kibana, but I am not able to browse it; it says 404 page not found. I don't even see the kibana folder being made in the /var/www directory. What might be the problem? Please help.

  21. @mrlesmithjr Can you explain why you install Redis? As far as I understand, logs go directly to Logstash and are then stored for 90 days. Redis sits and listens on port 6379, but does nothing?

  22. Hello kind sir. I have been trying to get this auto-install script running and I am having a few problems. I attribute it to my lack of Ubuntu/Linux command-line ninja skills. My network needs proxy settings added to /etc/environment and the wget config to get most of the script working, but I have run into a snag. I get to the part where the script installs plugins and it stops, because I get this message:

    Failed to install royrusso/elasticsearch-HQ, reason: failed to download out of all possible locations…, use --verbose to get detailed information

    I am assuming that I need to add my proxy settings somewhere, but I do not know where to add them for this particular part of the script.

    Any help would be greatly appreciated.

    -RB

    • I had the same problem some time ago, but with a dirty installation of Ubuntu. Today I installed an Ubuntu VM with just OpenSSH, configured the hosts file (sudo nano /etc/hosts --> change "127.0.1.1 SERVER" to "127.0.1.1 server.domain.com") and nothing more. I executed the script and magically everything went OK… EUREKA!!!

  23. I've tried following the steps in your instructions, but nothing happened!
    Can anyone help me? Many thanks!

    "……………………
    abc@XYZ:~$ git clone https://github.com/mrlesmithjr/Logstash_Kibana3
    Cloning into 'Logstash_Kibana3'…
    remote: Counting objects: 1036, done.
    Receiving objects: 100% (1036/1036), 201.34 KiB | 116.00 KiB/s, done.
    remote: Total 1036 (delta 0), reused 0 (delta 0)
    Resolving deltas: 100% (658/658), done.
    Checking connectivity… done.

    abc@XYZ:~$ ls -la
    total 65308
    drwxr-xr-x 5 juniper juniper 4096 Nov 19 21:44 .
    drwxr-xr-x 3 root root 4096 Nov 18 15:45 ..
    -rw------- 1 juniper juniper 4165 Nov 18 16:23 .bash_history
    -rw-r--r-- 1 juniper juniper 220 Nov 18 15:45 .bash_logout
    -rw-r--r-- 1 juniper juniper 3637 Nov 18 15:45 .bashrc
    drwx------ 2 juniper juniper 4096 Nov 18 15:46 .cache
    -rw-rw-r-- 1 juniper juniper 1069929 Apr 11 2014 kibana-3.0.1.tar.gz
    -rw-rw-r-- 1 juniper juniper 1074306 Nov 7 22:15 kibana-3.1.2.tar.gz
    drwxrwxr-x 5 juniper juniper 4096 Nov 19 21:44 Logstash_Kibana3
    -rw------- 1 root root 69 Nov 18 16:18 .nano_history
    -rw-r--r-- 1 juniper juniper 675 Nov 18 15:45 .profile
    -r--r--r-- 1 juniper juniper 64678021 Nov 18 16:15 VMwareTools-8.6.5-621624.tar.gz
    drwxr-xr-x 7 juniper juniper 4096 Feb 15 2012 vmware-tools-distrib
    abc@XYZ:~$ chmod +x ./Logstash_Kibana3/install_logstash_1.4_kibana_ubuntu.sh
    abc@XYZ:~$ sudo ./Logstash_Kibana3/install_logstash_1.4_kibana_ubuntu.sh
    abc@XYZ:~$
    ………………"

  24. I like the script, very nice. I just wanted to ask if you knew how to stop it from showing the logs of the system it's running on. It keeps logging my machine, but I wanted it to log just my routers. I can't seem to find it in your configs. Can you please help?

    • The easiest way would be to just create a new dashboard in Kibana to only show your router devices and exclude everything else.

      • filter {
        if [@source_host] == "foobarhost" {
        drop { }
        }
        }
        I figured it out. Just add this to the bottom of the logstash.conf file and it removes what I don't want to see.

  25. Larry, great site and thanks for the autoinstall scripts. Works great! I have my windows server sending logs to the ELK server.

    I’m trying to add a filter for my Sonicwall based on the filters from this site below, but I’m having some issues.

    http://itsanity.blogspot.com/2014/04/monitoring-vpn-logins-for-with-logstash.html

    I’ve modified the logstash.conf and added the following:

    # Sonicwall input
    input {
    syslog {
    type => "Sonicwall"
    port => "5514"
    }
    }

    # Setting up Sonicwall parsing
    filter {
    if [type] == "Sonicwall" {
    kv {
    exclude_keys => [ "c", "id", "m", "n", "pri", "proto" ]
    }
    grok {
    match => [ "src", "%{IP:srcip}:%{DATA:srcinfo}" ]
    }
    grok {
    match => [ "dst", "%{IP:dstip}:%{DATA:dstinfo}" ]
    }
    grok {
    remove_field => [ "srcinfo", "dstinfo" ]
    }
    geoip {
    add_tag => [ "geoip" ]
    source => "srcip"
    database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
    }
    }
    }

    After this, I restarted the logstash service and nginx, and ran "netstat -nltup" to make sure the ports are bound correctly.

    Even though logging is configured to send to the ELK server on port 5514, I don't see any host entry for it. Is there a conflicting filter that's causing this?

  26. Excellent tool. I really appreciate the work you have done. I'm running into a problem with ESXi. Setting up the host to log to my server using tcp://logstashurl:1514 as suggested doesn't work. tcpdump on the logstash server shows two messages coming in after configuration, then nothing. If I change it to UDP/514, the messages come pouring in, but of course they are not handled by your configuration. I tried several variations including TCP/UDP and 514/1514. The only combination that actually shows messages coming from ESXi in tcpdump is UDP/514.
    I am running ESXi 5.1. It was a clean Ubuntu 14.04 install with just apt-get upgrade done before following your instructions. Any ideas?

    • I have done some more digging. This is what I get in the /var/log/.vmsyslogd.err file.
      2015-02-27T18:22:14.238Z vmsyslog.loggers.network : ERROR ] Socket init calls failed
      2015-02-27T18:22:14.239Z vmsyslog.loggers.network : ERROR ] Can not connect to 172.19.2.9:1514 – disabled

      I get two packets from the esxi server on the logstash server but they contain no readable information.

        • Is the firewall port open for tcp/1514 on the host(s)? Seems like 5.1 was a little tricky possibly? But I have about 50 5.1 hosts in prod. using this same config. May spin up a nested instance and see for myself in the next few days.
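
          For reference, pointing a 5.x host at the collector from the ESXi shell looks something like this (standard esxcli syslog/firewall commands; logstashserver is a placeholder):

          esxcli system syslog config set --loghost='tcp://logstashserver:1514'
          esxcli system syslog reload
          esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
          esxcli network firewall refresh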

  27. I followed the directions to enable syslog in the Security Profile, which includes 514 and 1514. It seems it must be something on the logstash server; since 2 packets do come in, it isn't that nothing is coming from the esxi server. It seems the port isn't open on the logstash server, or possibly it isn't bound properly so nothing is responding. Unfortunately, my linux ability hasn't provided me the skill to find it yet.

    Stephen

  28. Great script. But I guess since I’m so new to Linux I’m doing something wrong. When I execute the command ./Logstash_Kibana3/install_logstash_1.4_kibana_ubuntu.sh

    I get the following error:
    ggraham@pdc-quvs-ubuntu01:/Logstash_Kibana3$ sudo ./install_logstash_1.4_kibana_ubuntu.sh
    : not foundogstash_1.4_kibana_ubuntu.sh: 1: ./install_logstash_1.4_kibana_ubuntu.sh: ls
    ./install_logstash_1.4_kibana_ubuntu.sh: 2: ./install_logstash_1.4_kibana_ubuntu.sh: ggraham@pdc-quvs-ubuntu01:/Logstash_Kibana3$: not found
    ./install_logstash_1.4_kibana_ubuntu.sh: 3: ./install_logstash_1.4_kibana_ubuntu.sh: Password:: not found
    ./install_logstash_1.4_kibana_ubuntu.sh: 4: ./install_logstash_1.4_kibana_ubuntu.sh: root@pdc-quvs-ubuntu01:/Logstash_Kibana3#: not found
    ./install_logstash_1.4_kibana_ubuntu.sh: 5: ./install_logstash_1.4_kibana_ubuntu.sh: root@pdc-quvs-ubuntu01:/Logstash_Kibana3#: not found
    ./install_logstash_1.4_kibana_ubuntu.sh: 6: ./install_logstash_1.4_kibana_ubuntu.sh: root@pdc-quvs-ubuntu01:/Logstash_Kibana3#: not found
    ./install_logstash_1.4_kibana_ubuntu.sh: 7: ./install_logstash_1.4_kibana_ubuntu.sh: root@pdc-quvs-ubuntu01:/Logstash_Kibana3#: not found
    : not foundogstash_1.4_kibana_ubuntu.sh: 8: ./install_logstash_1.4_kibana_ubuntu.sh: exit
    ./install_logstash_1.4_kibana_ubuntu.sh: 9: ./install_logstash_1.4_kibana_ubuntu.sh: ggraham@pdc-quvs-ubuntu01:/Logstash_Kibana3$: not found
    : not foundogstash_1.4_kibana_ubuntu.sh: 10: ./install_logstash_1.4_kibana_ubuntu.sh: logout

    I put the install script in the following directory: /Logstash_Kibana3
    I’m running this on a fresh install of Ubuntu Ubuntu 14.04.2 LTS
    Any insight you can provide would be helpful.

  29. I started over with a fresh install and followed your steps:

    For Logstash 1.4.x version: (CURRENT)
    sudo apt-get update
    sudo apt-get -y install git
    cd ~
    git clone https://github.com/mrlesmithjr/Logstash_Kibana3
    chmod +x ./Logstash_Kibana3/install_logstash_1.4_kibana_ubuntu.sh
    sudo ./Logstash_Kibana3/install_logstash_1.4_kibana_ubuntu.sh
    I was not prompted for anything, and when I went to http://ipaddress/kibana the page would not display. What am I missing here?

    g3

    • @g3 – I will do some checking and see if there is something wrong with the install script potentially. You are installing on Ubuntu correct?

      • Yes I’m using Ubuntu

        Linux pdc-quvs-ubuntu01 3.16.0-30-generic #40~14.04.1-Ubuntu SMP Thu Jan 15 17:43:14 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

        I would not be surprised if it was me. I’m still very new at this. Thanks for the script. I learned a lot running it line by line. My next task is adding your custom dashboard to my ELK server.

        Thanks again.

  30. Hello, I've been trying to get logstash and Windows Server to play together for a day or two when I finally came across your blog. The Windows server logs great to logstash and Kibana works fine! Thank you for that. The thing I am wondering about is how to log my Ubuntu servers. I have only done the tutorial from DigitalOcean before, and there it is only a matter of installing logstash-forwarder and setting the correct host. Will it be the same procedure here? I am not able to test yet, so I figured I'll ask you first.

    Regards JN

    • @Jonatan – In regards to your ubuntu servers you can just setup /etc/rsyslog.d/50-default.conf and add something like the following to the end of the file.

      *.* @@logstash.everythingshouldbevirtual.local

      Then restart rsyslog
      sudo service rsyslog restart

      However if you have specific needs for applications specific logs then you will need to install an instance of logstash on those servers and configure it to ship your application logs.
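
      For example, a bare-bones shipper config on the app server could look something like this (the path and hostname here are placeholders for your environment):

      input {
      file {
      path => "/var/log/myapp/*.log"
      type => "myapp"
      }
      }
      output {
      tcp {
      host => "logstash.everythingshouldbevirtual.local"
      port => "514"
      }
      }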

      Hope this helps!

  31. I used the auto-install script for ELK and installed nxlog on Windows. I changed the nxlog.conf file and restarted the logstash service, but I can't visualize my Windows logs in Kibana! Please, any help?
