Sunday, October 3, 2021

How to recover data from an unmountable Mac APFS drive

If you are unable to mount a Mac APFS drive and the built-in tools don't help, you can turn to Stellar Data Recovery Technician for Mac.

With this tool you can scan the drive, and you have a high chance of recovering and saving the data from the volume.

If the folder where you stored your very important data appears empty, run the "Deep scan"; it might take a while, but it will most probably show you your files.

If that doesn't help, you might want to find a company which can restore data from a corrupted SSD.

Friday, May 27, 2016

How to set the maximum amount of memory a Linux process can use


If I want to let a Python process use at most 90% of the total available memory, I use this command:

ulimit -Sv $((`python3 -c 'import psutil; print(psutil.phymem_usage().total)'`*9/10/1024)); python3 mystuff.py
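Note: current psutil releases removed the phymem_* helpers; a minimal equivalent sketch using virtual_memory() (assuming a reasonably recent psutil) looks like this:

ulimit -Sv $(( $(python3 -c 'import psutil; print(psutil.virtual_memory().total)') * 9 / 10 / 1024 )); python3 mystuff.py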

With ulimit you can set resource limits for the current session.

To dump the current configuration, simply run the following command:

>ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63785
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 63785
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited


You can change a value with -S (soft limit) plus the flag shown in the brackets above.
In this case we set the virtual memory parameter with -v.

To get the total available memory, you have multiple options:
  Option 1:
    cat /proc/meminfo | awk '/MemTotal/ {print $2}'
  Option 2:
    free | awk '/^Mem:/ {print $2}'
  Option 3:
    python3 -c 'import psutil; print(psutil.phymem_usage().total)'
Please keep in mind that ulimit expects this value in KB.
Examples on a 16 GB RAM machine:
>cat /proc/meminfo | awk '/MemTotal/ {print $2}'
16351304
>free | awk '/^Mem:/ {print $2}'
16351304
>python3 -c 'import psutil; print(psutil.phymem_usage().total)'
16743735296

As you can see, the last option reports bytes, so you need to divide by 1024. 
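For example, option 2 can be plugged into ulimit directly, since free already reports KB and no conversion is needed:

ulimit -Sv $(( $(free | awk '/^Mem:/ {print $2}') * 9 / 10 ))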

If your application reaches the limit, it will get signal 6 (SIGABRT) from Linux.

Notes:
  • You can use bc to calculate the value to set, but in this example it was not available, so simple bash arithmetic was used.
  • You could also calculate the whole value in Python, but this example keeps that part easy to replace.

Tuesday, December 8, 2015

[SOLVED] CentOS 7 ImportError: No module named cfnbootstrap

If you are trying to use CloudFormation init (cfn-init) on CentOS 7 or RHEL 7, you may run into the following issue:
# /opt/aws/bin/cfn-init 
Traceback (most recent call last):
  File "/opt/aws/bin/cfn-init", line 19, in <module>
    import cfnbootstrap
ImportError: No module named cfnbootstrap

Solution:
Validate that cfnbootstrap is available:
find / -name cfnbootstrap
/usr/lib/python2.7/dist-packages/cfnbootstrap

This means the module is installed, but Python cannot find it during import.
Let's check the PYTHONPATH variable, which defines the folders Python searches for modules.

This is a simple way to list the directories:
# python2.7
Python 2.7.5 (default, Jun 24 2015, 00:41:19) 
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> print '\n'.join(sys.path)

/usr/lib64/python2.7/site-packages/gevent-1.0.1-py2.7-linux-x86_64.egg
/usr/lib/python2.7/site-packages/boto-2.36.0-py2.7.egg
/usr/lib/python2.7/site-packages/python_binary_memcached-0.24-py2.7.egg
/usr/lib64/python27.zip
/usr/lib64/python2.7
/usr/lib64/python2.7/plat-linux2
/usr/lib64/python2.7/lib-tk
/usr/lib64/python2.7/lib-old
/usr/lib64/python2.7/lib-dynload
/usr/lib64/python2.7/site-packages
/usr/lib/python2.7/site-packages
>>> 
As you can see, the module's containing folder (/usr/lib/python2.7/dist-packages/) is not listed here.

Now you have two options to solve this:
  1. Create a symbolic link between the folders:
    ln -s /usr/lib/python2.7/dist-packages/cfnbootstrap /usr/lib/python2.7/site-packages/cfnbootstrap 
  2. Extend the PYTHONPATH variable, for example by adding the following line to a shell profile (note that /etc/environment is not parsed by a shell, so an export line does not belong there):
    export PYTHONPATH="${PYTHONPATH}:/usr/lib/python2.7/dist-packages/"
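Either way, a quick check that the import now works (paths as in this example):

python2.7 -c 'import cfnbootstrap; print cfnbootstrap.__file__'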

I hope it helps.

Sunday, November 8, 2015

Scripting LVM Snapshot Backups

In this article I will show you how to simply back up an LV.
In this example we are going to script the LVM backup of a mail server's mailbox store.

The LVM backup script


#!/bin/bash
SIZE=1G            # Size of a snapshot. Make sure there is space for all 7 snapshots
VGNAME=/dev/vg00    # /dev/VGNAME
LVTOBACKUP=mail        # only the name of the LV
DAY=`date +%a`
SNAPSHOTNAME="$LVTOBACKUP-$DAY"

# ---------------------- Do not edit under this line  -------------------

SCRIPTNAME=$0
SENDMONITORINGDATA=true
FAILOCCURED=0
# Detailed log to syslog
#exec > >(logger -t "$SCRIPTNAME" -p local3.info ) 2> >(logger -t "$SCRIPTNAME" -p local3.info)
set -x
ZBXSENDER=`which zabbix_sender`
if [ $? -ne 0 ]
then
    SENDMONITORINGDATA=false
    echo "Not sending data to Zabbix"
fi
ZBXCONFIG=`find /etc/zabbix/ -maxdepth 1  -name '*.conf' | tail -n 1`
sendMonitoringData(){
    if [ "$SENDMONITORINGDATA" = true ]
    then
        echo "Sending monitoring data: $SCRIPTNAME - $1"
        $ZBXSENDER --config $ZBXCONFIG --key scripterror --value $1
        $ZBXSENDER --config $ZBXCONFIG --key scriptname --value $SCRIPTNAME
    fi
}
checkResult(){
    EXITSTATUS="$1"
    shift
    MSG="$*"
    if [ $EXITSTATUS -eq 0 ]
    then
        echo "$MSG - OK"
    else
        echo "$MSG - FAIL"
        sendMonitoringData $EXITSTATUS
        FAILOCCURED=1
    fi
}
lvs "$VGNAME/$SNAPSHOTNAME" 2>&1 > /dev/null
RETCODE=$?
checkResult $RETCODE "Failed to check $VGNAME/$SNAPSHOTNAME"
if [ $RETCODE -eq 0 ]
then
    lvremove -f "$VGNAME/$SNAPSHOTNAME"
    checkResult $? "Failed to remove $VGNAME/$SNAPSHOTNAME"
fi
sync
lvcreate -L "$SIZE" -s -n "$SNAPSHOTNAME" "$VGNAME/$LVTOBACKUP"
checkResult $?  "Failed to create $VGNAME/$SNAPSHOTNAME"
if [ $FAILOCCURED -eq 0 ]
then
    sendMonitoringData 0
else
    echo "Due to failure no more alert to monitoring"
fi


Explanation


The script may seem a bit complicated; let me explain.

LVM part

The basic backup script looks like this:
#!/bin/bash
SIZE=1G            # Size of a snapshot. Make sure there is space for all 7 snapshots
VGNAME=/dev/vg00    # /dev/VGNAME
LVTOBACKUP=mail        # only the name of the LV
DAY=`date +%a`
SNAPSHOTNAME="$LVTOBACKUP-$DAY"

# ---- 
lvs "$VGNAME/$SNAPSHOTNAME" 2>&1 > /dev/null
RETCODE=$?
if [ $RETCODE -eq 0 ]
then
    lvremove -f "$VGNAME/$SNAPSHOTNAME"
fi
sync
lvcreate -L "$SIZE" -s -n "$SNAPSHOTNAME" "$VGNAME/$LVTOBACKUP"


For an LVM snapshot you need to define the snapshot Logical Volume's size.
It depends on how much data the application you are backing up changes while the snapshot exists.
After an initial period you can check the used amount of snapshot space like this:
# lvs
  LV       VG   Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  mysql    vg00 -wi-ao  5,00g                                     
  swap     vg00 -wi-ao  4,00g                                     
  mail     vg00 owi-ao 30,00g                                     
  mail-Sun vg00 swi-a-  1,00g mail    0,70

You can see it uses only 0.7% of the configured size; I could have configured a smaller size...

I'm using daily backups: the script creates a new LV snapshot every day. The name comes from the original Logical Volume's name and the short form of the day of the week.
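A possible crontab entry for this, assuming the script is saved as /usr/local/sbin/lvm-backup.sh (the path is an example), runs it every night at 2:00:

0 2 * * * /usr/local/sbin/lvm-backup.sh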

The script first checks whether last week's LVM snapshot is still there. If it exists, it removes it first.
Then come the last lines:
sync
lvcreate -L "$SIZE" -s -n "$SNAPSHOTNAME" "$VGNAME/$LVTOBACKUP"

Sync is important because we need the data on disk, not in the cache ;-)
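The snapshot itself is only a frozen view; to get a real copy you still have to archive it. A minimal sketch, assuming a mount point /mnt/snap and a destination directory /backup (both names are examples):

mkdir -p /mnt/snap
mount -o ro "$VGNAME/$SNAPSHOTNAME" /mnt/snap   # XFS may additionally need -o nouuid
tar czf "/backup/$SNAPSHOTNAME.tar.gz" -C /mnt/snap .
umount /mnt/snap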

Monitoring

You can see that there are more lines related to monitoring than to the backup itself.
To make sure this script can be used on different distributions and versions, the paths are not hard-coded.
Every step of the backup process is monitored and reported to Zabbix and, of course, to syslog.

Please feel free to use it.

All feedback is welcome.

Monday, September 28, 2015

[SOLVED] Python Cannot assign requested address

Python exception

Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/srv/sbproxy/sandboxproxytest/framework/multiprocess.py", line 39, in run
    res = self._function(*task)
.
.
.
    return requests.request(method, url, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/api.py", line 44, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 455, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 558, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 378, in send
    raise ConnectionError(e)
ConnectionError: HTTPConnectionPool(host='172.31.37.5', port=8080): Max retries exceeded with url: /jobs/list?after=2015-09-27 (Caused by <class 'socket.error'>: [Errno 99] Cannot assign requested address)


Check the number of TCP connections:
netstat | grep -c tcp
When the exception happens, the above command shows around 28000.
Reason can be seen here:
http://stackoverflow.com/questions/11190595/repeated-post-request-is-causing-error-socket-error-99-cannot-assign-reques
If the connections are in CLOSE_WAIT, then the server closed the connection earlier than it should have; but if the connections are in TIME_WAIT state, that is normal:

RFC 793 sets the TIME-OUT to be twice the Maximum Segment Lifetime, or 2MSL. Since MSL, the maximum time a packet can wander around Internet, is set to 2 minutes, 2MSL is 4 minutes. Since there is no ACK to an ACK, the active closer can't do anything but to wait 4 minutes if it adheres to the TCP/IP protocol correctly, just in case the passive sender has not received the ACK to its FIN (theoretically).

Some explanation:
http://serverfault.com/a/329846
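One handy way to see which state dominates is counting sockets per TCP state (ss is the modern replacement for netstat):

ss -tan | awk 'NR>1 {print $1}' | sort | uniq -c | sort -rn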

Solution

Set the tcp_tw_recycle to 1
echo 1 >   /proc/sys/net/ipv4/tcp_tw_recycle
Or add to /etc/sysctl.conf:
net.ipv4.tcp_tw_recycle = 1
and reload the configuration like this:
sysctl -p 

Some documentation advises using /proc/sys/net/ipv4/tcp_tw_reuse, but in my case it did not solve the issue. (A word of caution: tcp_tw_recycle is known to break clients behind NAT and was removed entirely in Linux 4.12, so on modern kernels prefer tcp_tw_reuse or reusing connections in the application.)

Sunday, August 9, 2015

Web server security considerations



In this post I present some common mistakes that make an attacker's job easier when attacking a web server, even though they would be simple to fix.

Many companies simply pay no attention to securing their web server, since "anyone can run a web server".

You can run it so that it just works, or so that it works securely; typically "just make it work" is the goal.

Here are the problems:

Unnecessary header parameters

One of the first rules taught in server security training is not to reveal which server software, and which version of it, serves the requests.

Server

Almost every web server automatically adds its name and version number to the HTTP response. This is good for the vendors, since they score better in market-share surveys, but it also puts an attacker in a comfortable position, handing over the name and version unasked.
Against an unpatched system nothing more is needed: based on the server type and version, the known vulnerabilities can simply be looked up (e.g. at https://www.exploit-db.com/) and the attack can begin. Each web server has its own way of turning this off.

NginX
The nginx-extras package is recommended, since it can remove the Server header entirely:
        more_clear_headers 'Server';
Independently of this, it is also worth setting the server_tokens parameter to off in /etc/nginx/nginx.conf.
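Put together, a minimal sketch of the relevant nginx configuration (more_clear_headers comes from the headers-more module shipped in nginx-extras):

http {
    server_tokens off;
    more_clear_headers 'Server';
}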

Apache

Two modifications are recommended in the /etc/apache2/conf.d/security file:
 ServerTokens Prod
ServerSignature Off
This only hides the version; the header itself remains, so it is worth extending the domain's config with the following (provided by mod_headers):
Header always unset "Server"

HAproxy
HAproxy is not primarily meant for this, but besides load balancing it can help with a lot of things, and manipulating headers is one of them.
The Server header can be removed with it by adding the following line to the relevant backend section:
 rspidel ^Server:.*
Still, it is better to configure this on the web servers; don't occupy the load balancer with such tasks.

X-Powered-By

This header presumably came about with a similar purpose to the previous one: it tells what the web application was built with and which version runs it.

In PHP it can be switched off by setting the expose_php line of the appropriate php.ini to Off:

expose_php = Off

If that is not possible, it is worth leaving the job to the web server.

Session ID

This is related to the previous point. If possible, users should not be told what the application was written in, either.

In PHP, the session.name parameter in php.ini determines the name under which the session cookie appears at the clients. By default this is PHPSESSID; it is worth leaving PHP out of the string.

Of course the URL can also reveal what the application was written in; we assume SEO-friendly URLs in this case...

Error pages

Depending on the web server, the static HTML files belonging to the 404, 403, etc. statuses also contain the web server type.
Besides a friendlier user notification, this is one more reason to use custom error pages.

Unnecessary modules

Web servers usually ship with a predefined set of modules, aimed at common needs and at requiring as little configuration as possible. This has a downside too: an enabled Apache autoindex module, for example, gives access to information you may not want to expose, such as the listing of a CMS's modules, templates and other directories.
If it is not needed, it is worth turning it off.
On Apache: a2dismod autoindex


PHP modules

The previous thoughts do not only apply to web servers: whatever is not strictly needed should be disabled.
A simple and convenient tool for this: LAMPSecurityToolkit

Defending against reconnaissance scans

It is worth looking into the web server log from time to time to see what requests our server answers with 404 Not Found or 403 Forbidden.
E.g.:
[09/Aug/2015:12:11:35 +0200] "GET /MyAdmin/scripts/setup.php HTTP/1.1" 404 56 "-" "ZmEu"
[09/Aug/2015:12:11:36 +0200] "GET /scripts/setup.php HTTP/1.1" 404 56 "-" "ZmEu"
[09/Aug/2015:12:11:37 +0200] "GET /db/scripts/setup.php HTTP/1.1" 404 56 "-" "ZmEu"
[09/Aug/2015:12:11:38 +0200] "GET /dbadmin/scripts/setup.php HTTP/1.1" 404 56 "-" "ZmEu"
[09/Aug/2015:12:11:39 +0200] "GET /myadmin/scripts/setup.php HTTP/1.1" 404 56 "-" "ZmEu"
[09/Aug/2015:12:11:39 +0200] "GET /mysql/scripts/setup.php HTTP/1.1" 404 56 "-" "ZmEu"
[09/Aug/2015:12:11:40 +0200] "GET /mysqladmin/scripts/setup.php HTTP/1.1" 404 56 "-" "ZmEu"
[09/Aug/2015:12:11:41 +0200] "GET /phpadmin/scripts/setup.php HTTP/1.1" 404 56 "-" "ZmEu"
[09/Aug/2015:12:11:42 +0200] "GET /phpMyAdmin/scripts/setup.php HTTP/1.1" 404 56 "-" "ZmEu"


Against this kind of unsolicited probing it is advisable to teach fail2ban to keep such visitors away. Here is an example: Apache 404.
This has to be handled with care, though: if some file is simply missing from our own website, it can lead to needless bans...
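A minimal sketch of such a jail (the filter name, regex and thresholds below are illustrative examples; tune them to your log format before relying on them):

cat > /etc/fail2ban/filter.d/apache-404.conf <<'EOF'
[Definition]
failregex = ^<HOST> .* "(GET|POST|HEAD) [^"]*" 404
ignoreregex =
EOF

cat >> /etc/fail2ban/jail.local <<'EOF'
[apache-404]
enabled  = true
port     = http,https
filter   = apache-404
logpath  = /var/log/apache2/access.log
maxretry = 10
findtime = 60
bantime  = 3600
EOF

service fail2ban restart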


Filtering by file extension

It happens that a code injection vulnerability is found in a web application, but the service has to be kept up until it is fixed.
E.g.: as the result of a vulnerability, attackers created PHP files in a cache directory:
# ls -1 cache/*.php
cache/blog27.php
cache/dirs2.php
cache/start65.php
cache/template39.php


Not a comfortable situation, but with the configuration below we temporarily escape the flood of spam (presumably generated by these PHP files):

NginX:
location ~ /cache/(.+)\.php$ {
        deny all;
 }
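A quick way to verify the rule (the hostname is a placeholder); the request should now return 403:

curl -sI http://example.com/cache/blog27.php | head -n 1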

I hope the above suggestions prove useful; opinions and feedback are welcome.

Wednesday, July 30, 2014

Citrix XenServer 6.2 Zabbix Agent installation

Install packages:

rpm -ivh http://repo.zabbix.com/zabbix/2.0/rhel/5/i386/zabbix-release-2.0-1.el5.noarch.rpm
rpm -ivh http://repo.zabbix.com/zabbix/2.0/rhel/5/i386/zabbix-2.0.12-1.el5.i386.rpm
rpm -ivh http://repo.zabbix.com/zabbix/2.0/rhel/5/i386/zabbix-agent-2.0.12-1.el5.i386.rpm

Add Zabbix agent to services:

echo -e "zabbix\t\t10050/tcp\t\t\t#Zabbix-Agent"  >> /etc/services

Enable the traffic (iptables rule)

lokkit
- customize
- other services: zabbix:tcp

Check whether the traffic is already enabled:

iptables-save  | grep 10050

Set the Zabbix server IP/DNS:

sed -i 's/Server=127.0.0.1/Server=monserver/' /etc/zabbix/zabbix_agentd.conf

Restart the agent:

/etc/init.d/zabbix-agent restart
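To verify the agent answers, you can query it from the Zabbix server (zabbix_get ships in the zabbix-get package; the hostname is an example):

zabbix_get -s xenserver01 -k agent.ping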

Done

Sunday, January 5, 2014

Dlink DNS 323 fan always runs

I have a Dlink DNS 323 NAS with original firmware.
The device works great, but sometimes the fan runs continuously for no reason, even though the temperature is not high.

When I log in to the administration page, set the fan control to "always high" and save the configuration, the fan switches to maximum RPM.
If I then set the fan control back to off/low/high and save the config, the fan switches off. After that the fan control works properly again for a while,
switching to low and off depending on the temperature.

I've written a small script which helps me work around this issue. It can be run manually or, with some modification, from cron.

Here is the script:

#!/bin/bash

USER="admin"
echo -n "Admin pasword: "
read PASS
echo -n "NAS IP: "
read IP
echo "Logging in..."
wget -q --save-cookies cookies.txt --post-data "f_LOGIN_NAME=$USER&f_LOGIN_PASSWD=$PASS&Config_Button=Configuration" http://$IP/goform/formLogin -O - > /dev/null
echo "Switch to high RPM"
wget -q --load-cookies cookies.txt --post-data 'f_onoff0=1&f_time=15&f_fan=2&power_onoff0=1' http://$IP/goform/SetPowerManagement -O - > /dev/null
echo "Wait a sec..."
sleep 1
echo "Switch off"
wget -q --load-cookies cookies.txt --post-data 'f_onoff0=1&f_time=15&f_fan=0&power_onoff0=1' http://$IP/goform/SetPowerManagement -O - > /dev/null
echo "Finished"
rm -f cookies.txt
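For cron the interactive prompts have to go: hard-code USER, PASS and IP (or source them from a root-only file) and schedule the modified copy, for example hourly (the script path is an example):

0 * * * * /usr/local/bin/dns323-fan.sh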

Please feel free to use it.

Monday, August 26, 2013

Cheap car GPS tracking with OpenWRT I. The basics


My goal is to assemble a reliable GPS tracking system from cheap components with OpenWRT.
I would like to install it in my car. It records the GPS data (latitude, longitude, speed) every 10 seconds and uploads it to a server when it has a WIFI connection.
In the first run it scans the available WIFI networks frequently and tries to connect and upload the recorded data through open, non-secured access points.
I would like to make a simple web page where I can follow the car's movement on the Google map.


I'm happy with it. It is running in pilot mode.

The parts of the project:
  • TP-LINK  TL-MR3020 router running OpenWRT
  • USB hub to multiply the single USB connector
  • Flash drive to store recorded data
  • GPS receiver  to collect data
  • Power bank to keep it working in case the car's battery is disconnected
  • PCA with power connectors, fuse, DC-DC converter, etc
  • Webserver listening on the Internet

Here you can see a picture from the current status:

GPS tracking system

First I present the components, then I will write about the "why" in the following posts.


TP-LINK  TL-MR3020 
This is a cheap, portable router which can share your 3G mobile connection as a WIFI access point. It has one USB connector, one 10/100 Mb/s LAN connector and a built-in 5 dBm WIFI antenna.
An advantage of this device is that it can run from USB power (5V, max 500mA).
I've replaced the original firmware with OpenWRT following this tutorial.
It was very simple and works out of the box.

USB Hub
It is a cheap one, but it can work with external power. The external power is important because the GPS receiver uses approx. 300mA; without the extra power, Linux wrote I/O errors into the log...
In the first step I use one connector for the USB drive and another for the GPS receiver. (I plan to extend the system with a 3G stick to send the recorded data to the server immediately.)

Flash drive
The router has only 4MB of flash storage, so it needs more space to store the recorded data. I've used a flash drive with 2GB of space.

GPS receiver
While collecting input for this project I found this and this article. They used the Globalsat BU-353 GPS receiver. It is the most expensive part of the project, but it works pretty well.

Power bank
I've chosen a 5V, 2600mAh power bank which can be charged from a USB connector. Normally it will not be needed, because I will connect the system to a constant power source in the car; one of my expectations is that the system has to keep working for a while when external power is not available. It has a normal USB connector for power input and a micro USB for power output. Both connectors are connected constantly.

PCA
As you can see in the picture, I soldered some components onto a project board. The most important component is the DC-DC converter, which makes a stable 5V from the car's 12-14V.

Webserver
Basically the system stores the collected data on the flash drive, but when it has a connection to the Internet it synchronizes the local data with the remote database.
I would like to make a simple web page where I can follow the car's movement on the Google map.
It would inform the visitor about the car's current position, the last "heartbeat" when the car logged in to the page, etc.


I will follow up soon with the presentation of the components.

Thursday, May 16, 2013

Logalyze Installation

I've installed Logalyze on Debian Linux 7.1 64bit as follows:

Required packages:
apt-get install -y jbossas4 libapache2-mod-jk libjetty-java  libapache2-mod-jk openjdk-6-jdk

Delete the cache:
apt-get clean

Download the installer:
cd /usr/src
wget http://www.logalyze.com/downloads/finish/2-installer/20-logalyze-full-package-tar-gz-archive -P .
tar xzvf 20-logalyze-full-package-tar-gz-archive
cd logalyze/conf
 
Configure Logalyze:
 rename 's/\.sample$//' *.sample
 
Not the best solution, but it works:
 
echo -e "\n\nexport JAVA_HOME=/usr/lib/jvm/java-6-openjdk-amd64/ \n" >> ../bin/setenv.sh

Start the stuff:
cd /usr/src/logalyze/bin/
./setenv.sh
./startup.sh
cd /usr/src/logalyze/admin/bin/
 ./startup.sh

Shutdown the service:
cd /usr/src/logalyze/bin/
./shutdown.sh
cd ../admin/bin/
./shutdown.sh
Open the browser:
http://IPADDRESS:8080
Username: admin
Password: logalyze

Graylog2 Installation

I've installed Graylog2 on Debian Linux 6.0.7 using the following script:

#! /bin/bash



#Provided by @mrlesmithjr
#EveryThingShouldBeVirtual.com
# KP mod
# Debian 6.0 Install Script
#
# setup logging
# Logs stderr and stdout to separate files.
mkdir -p ./graylog2   # make sure the log directory exists before tee writes into it
exec 2> >(tee "./graylog2/install_graylog2.err")
exec > >(tee "./graylog2/install_graylog2.log")
#
# Apache Settings
#change x.x.x.x to whatever your ip address is of the server you are installing on or let the script auto detect your IP
#which is the default
#SERVERNAME="x.x.x.x"
#SERVERALIAS="x.x.x.x"
#
#
echo "Detecting IP Address"
IPADDY="$(sudo ifconfig | grep -A 1 'eth0' | tail -1 | cut -d ':' -f 2 | cut -d ' ' -f 1)"

SERVERNAME=$IPADDY
SERVERALIAS=$IPADDY

echo "Disabling CD Sources and Updating Apt Packages and Installing Pre-Reqs"
sudo sed -i -e 's|deb cdrom:|# deb cdrom:|' /etc/apt/sources.list
sudo apt-get -qq update
sudo apt-get -y install git curl apache2 libcurl4-openssl-dev apache2-prefork-dev libapr1-dev libcurl4-openssl-dev apache2-prefork-dev libapr1-dev build-essential openssl libreadline6 libreadline6-dev curl git-core zlib1g zlib1g-dev libssl-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt-dev autoconf libc6-dev ncurses-dev automake libtool bison subversion pkg-config python-software-properties

#Debian 6.0 Testing of all-in-one script
#sudo apt-get -y install apt-file
#sudo apt-file update

#Install OpenJDK Java 6
echo "Installing OpenJDK Java 6"
sudo apt-get -y install openjdk-6-jre

echo "Downloading Elasticsearch"

git clone https://github.com/elasticsearch/elasticsearch-servicewrapper.git
sudo chown -R $USER:$USER /opt

cd /opt
git clone https://github.com/elasticsearch/elasticsearch-servicewrapper.git

echo "Downloading Elastic Search, Graylog2-Server and Graylog2-Web-Interface to /opt"

wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.20.6.tar.gz
wget http://download.graylog2.org/graylog2-server/graylog2-server-0.11.0.tar.gz
wget http://download.graylog2.org/graylog2-web-interface/graylog2-web-interface-0.11.0.tar.gz

#extract files
echo "Extracting Elasticsearch, Graylog2-Server and Graylog2-Web-Interface to /opt"

for f in *.tar.gz
do
tar zxf "$f"
done

# Create Symbolic Links
echo "Creating SymLinks for elasticsearch and graylog2-server"
ln -s elasticsearch-0.20.6/ elasticsearch
ln -s graylog2-server-0.11.0/ graylog2-server

#Install elasticsearch
echo "Installing elasticsearch"

mv *servicewrapper*/service elasticsearch/bin/
rm -Rf *servicewrapper*
sudo /opt/elasticsearch/bin/service/elasticsearch install
sudo ln -s `readlink -f elasticsearch/bin/service/elasticsearch` /usr/bin/elasticsearch_ctl
sed -i -e 's|# cluster.name: elasticsearch|cluster.name: graylog2|' /opt/elasticsearch/config/elasticsearch.yml
/etc/init.d/elasticsearch start

#Test elasticsearch
# curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'

#Install mongodb
echo "Installing MongoDB"
# KP valtoztatasa
apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
echo "deb http://downloads-distro.mongodb.org/repo/debian-sysvinit dist 10gen" | sudo tee /etc/apt/sources.list.d/10gen.list
sudo apt-get -qq update
sudo apt-get -y install mongodb-10gen

#Install graylog2-server
echo "Installing graylog2-server"

cd graylog2-server-0.11.0/
cp /opt/graylog2-server/elasticsearch.yml{.example,}
sudo ln -s /opt/graylog2-server/elasticsearch.yml /etc/graylog2-elasticsearch.yml
cp /opt/graylog2-server/graylog2.conf{.example,}
sudo ln -s /opt/graylog2-server/graylog2.conf /etc/graylog2.conf
sed -i -e 's|mongodb_useauth = true|mongodb_useauth = false|' /opt/graylog2-server/graylog2.conf

echo "Creating /etc/init.d/graylog2-server startup script"

(
cat <<'EOF'
#!/bin/sh
#
# graylog2-server: graylog2 message collector
#
# chkconfig: - 98 02
# description: This daemon listens for syslog and GELF messages and stores them in mongodb
#
CMD=$1
NOHUP=`which nohup`
JAVA_CMD=/usr/bin/java
GRAYLOG2_SERVER_HOME=/opt/graylog2-server
start() {
 echo "Starting graylog2-server ..."
$NOHUP $JAVA_CMD -jar $GRAYLOG2_SERVER_HOME/graylog2-server.jar > /var/log/graylog2.log 2>&1 &
}

stop() {
PID=`cat /tmp/graylog2.pid`
echo "Stopping graylog2-server ($PID) ..."
kill $PID
}

restart() {
echo "Restarting graylog2-server ..."
stop
start
}

case "$CMD" in
start)
start
;;
stop)
stop
;;
restart)
restart
;;
*)
echo "Usage $0 {start|stop|restart}"
RETVAL=1
esac
EOF
) | sudo tee /etc/init.d/graylog2-server

sudo chmod +x /etc/init.d/graylog2-server

#Start graylog2-server on bootup
echo "Making graylog2-server startup on boot"

sudo update-rc.d graylog2-server defaults

#Install graylog2 web interface
echo "Installing graylog2-web-interface"

cd /opt/
ln -s graylog2-web-interface-0.11.0 graylog2-web-interface

#Install Ruby
echo "Installing Ruby"

sudo apt-get -y install libgdbm-dev libffi-dev
\curl -L https://get.rvm.io | bash -s stable
# KP valtoztatasa
source /etc/profile.d/rvm.sh
#source $HOME/.rvm/scripts/rvm
rvm install 1.9.2

#Install Gems
echo "Installing Ruby Gems"

cd /opt/graylog2-web-interface
gem install bundler --no-ri --no-rdoc
bundle install

#Set MongoDB Settings
echo "Configuring MongoDB"

echo "
production:
 host: localhost
 port: 27017
 username: grayloguser
 password: password123
 database: graylog2" | tee /opt/graylog2-web-interface/config/mongoid.yml

#Create MongoDB Users and Set Passwords
echo "Creating MongoDB Users and Passwords"

mongo admin --eval "db.addUser('admin', 'password123')"
mongo admin --eval "db.auth('admin', 'password123')"
mongo graylog2 --eval "db.addUser('grayloguser', 'password123')"
mongo graylog2 --eval "db.auth('grayloguser', 'password123')"

#Test Install
#cd /opt/graylog2-web-interface
#RAILS_ENV=production script/rails server

# Install Apache-passenger
echo "Installing Apache-Passenger Modules"

gem install passenger
passenger-install-apache2-module --auto

#Add passenger code
echo "Adding Apache Passenger modules to /etc/apache2/httpd.conf"

echo "LoadModule passenger_module $HOME/.rvm/gems/ruby-1.9.2-p320/gems/passenger-3.0.18/ext/apache2/mod_passenger.so" | sudo tee -a /etc/apache2/httpd.conf
echo "PassengerRoot $HOME/.rvm/gems/ruby-1.9.2-p320/gems/passenger-3.0.18" | sudo tee -a /etc/apache2/httpd.conf
echo "PassengerRuby $HOME/.rvm/wrappers/ruby-1.9.2-p320/ruby" | sudo tee -a /etc/apache2/httpd.conf

#Restart Apache2
echo "Restarting Apache2"

sudo /etc/init.d/apache2 restart
#If apache fails and complains about unable to load mod_passenger.so check and verify that your passengerroot version matches

#Configure virtualhost
echo "Configuring Apache VirtualHost"

echo "
<VirtualHost *:80>
ServerName ${SERVERNAME}
ServerAlias ${SERVERALIAS}
DocumentRoot /opt/graylog2-web-interface/public

#Allow from all
Options -MultiViews

ErrorLog /var/log/apache2/error.log
LogLevel warn
CustomLog /var/log/apache2/access.log combined
</VirtualHost>" | sudo tee /etc/apache2/sites-available/graylog2

# Enable virtualhost
echo "Enabling Apache VirtualHost Settings"

sudo a2dissite 000-default
sudo a2ensite graylog2
sudo service apache2 reload

# Restart apache
echo "Restarting Apache2"

sudo /etc/init.d/apache2 restart

#Now we need to modify some things to get rsyslog to forward to graylog. this is useful for ESXi syslog format to be correct.
echo "Updating graylog2.conf, rsyslog.conf"

sudo sed -i -e 's|syslog_listen_port = 514|syslog_listen_port = 10514|' /etc/graylog2.conf
sudo sed -i -e 's|mongodb_password = 123|mongodb_password = password123|' /etc/graylog2.conf
sudo sed -i -e 's|#$ModLoad immark|$ModLoad immark|' /etc/rsyslog.conf
sudo sed -i -e 's|#$ModLoad imudp|$ModLoad imudp|' /etc/rsyslog.conf
sudo sed -i -e 's|#$UDPServerRun 514|$UDPServerRun 514|' /etc/rsyslog.conf
sudo sed -i -e 's|#$ModLoad imtcp|$ModLoad imtcp|' /etc/rsyslog.conf
sudo sed -i -e 's|#$InputTCPServerRun 514|$InputTCPServerRun 514|' /etc/rsyslog.conf
sudo sed -i -e 's|*.*;auth,authpriv.none|#*.*;auth,authpriv.none|' /etc/rsyslog.conf
echo '$template GRAYLOG2,"<%PRI%>1 %timegenerated:::date-rfc3339% %HOSTNAME% %syslogtag% - %APP-NAME%: %msg:::drop-last-lf%\n"' | sudo tee /etc/rsyslog.d/32-graylog2.conf
echo '$ActionForwardDefaultTemplate GRAYLOG2' | sudo tee -a  /etc/rsyslog.d/32-graylog2.conf
echo '$PreserveFQDN on' | sudo tee -a  /etc/rsyslog.d/32-graylog2.conf
echo '*.err;*.crit;*.alert;*.emerg;cron.*;auth,authpriv.* @localhost:10514' | sudo tee -a  /etc/rsyslog.d/32-graylog2.conf

#Restart All Services
echo "Restarting All Services Required for Graylog2 to work"

sudo service elasticsearch restart
sudo service mongodb restart
sudo service graylog2-server restart
sudo service rsyslog restart
sudo service apache2 restart

#All Done
echo "Installation has completed!!"
echo "Browse to IP address of this Graylog2 Server Used for Installation"
echo "IP Address detected from system is $IPADDY"
echo "Browse to http://$IPADDY"
echo "You Entered $SERVERNAME During Install"
echo "Browse to http://$SERVERNAME If Different"
echo "EveryThingShouldBeVirtual.com"
echo "@mrlesmithjr"

# KP valtoztatasa
mkdir -p /opt/graylog2-web-interface/tmp/cache
chmod 777 -R /opt/graylog2-web-interface/tmp/
Start the web-interface with the following:

cd /opt/graylog2-web-interface/ ; /usr/local/rvm/gems/ruby-1.9.2-p320/bin/passenger  start -e production | logger -t graylog-web-interface






The original script is from this site:

http://everythingshouldbevirtual.com/ubuntu-12-04-graylog2-installation