Tuxites: 2012

Friday 10 February 2012

Configuring AIDE, Enforcing Resource Limits with Response Strategies

Once you have applied the previous workarounds and defined your security policies (complete or incomplete, a book or just a page) for people and for systems, your next concern should be: how do you respond?

Before proceeding further, let's recall what basics we are including in our policy.


For people and systems, you have to clearly define activities to manage human 'incidents' or 'events'. You have to define who is in charge of what. ITIL defines this with CIs and ownership, but this time you have to work it out in terms of security: who is in charge of what, who makes the final decision on false alarms, and when you are going to mirror your breached system's disks and notify law-enforcement agencies !! You have to monitor system activity at regular intervals, set up a central log server as we discussed, use logwatch to ease your life (and, beyond the server-farm network, you can set up delay pools on your squid for your users) and follow the golden rule: backup, backup, backup.
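The squid delay pools mentioned above might look like this in squid.conf - a minimal sketch assuming a class-2 pool; the ACL name, network and byte rates are illustrative, not recommendations:

```
# squid.conf sketch - throttle per-user bandwidth with a class-2 delay pool
# ("users" and a.b.c.0/24 are placeholders for your own ACL and network)
acl users src a.b.c.0/24
delay_pools 1
delay_class 1 2
# aggregate unlimited (-1/-1), each host refills at 16 kB/s up to a 32 kB burst
delay_parameters 1 -1/-1 16000/32000
delay_access 1 allow users
```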

As soon as you get warning signals like services mysteriously dying, erased tracks or spikes in bandwidth, don't offer services from the system. Boot from verified media to confirm the breach, analyse the logs on your remote logger and check whether they match the local logs ! Check file integrity against a read-only backup of your RPM database, make an image of the system for forensics, wipe out your box, reinstall, harden and restore from backup. You're up again.

Before proceeding further, let's go to # and make sure yum.conf contains

gpgcheck=1

How To - Integrity Checking:
First of all, define a policy - a general strategy for implementing integrity checking. Then install AIDE

# yum install aide                (it's not installed by default)

Remember, the prelinking feature, which is enabled by default, can interfere with integrity checking and produce false alarms. Disable prelinking by setting PRELINKING=no in /etc/sysconfig/prelink and restore your binaries to a non-prelinked state by running

# /usr/sbin/prelink -ua

Now customize /etc/aide.conf according to your requirements. The default one is fine for most of us.
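If you do customize it, a selection block might look like this - a sketch only; the attribute groups and default rule names vary between AIDE versions:

```
# /etc/aide.conf sketch (illustrative; check your version's defaults)
NORMAL = p+i+n+u+g+s+m+c+md5+sha256

/etc    NORMAL
/bin    NORMAL
/sbin   NORMAL
# logs change constantly, so exclude them from integrity alarms
!/var/log/.*
```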

Generate a database-

# /usr/sbin/aide --init

(will be written to /var/lib/aide/aide.db.new.gz)

Install it -

# cp /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz

Wanna check?

# /usr/sbin/aide --check

Now, before doing anything else, copy /etc/aide.conf, /usr/sbin/aide (yes, the binary itself) and the newly generated database to read-only remote media. On any unexpected output from a check you can compare your local db and files with the read-only remote copies to be sure that everything is fine.

Implement periodic checking via /etc/crontab -

10 2 * * * root /usr/sbin/aide --check
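The bare cron entry above throws aide's report away. One way to get notified instead is to wrap the check in a small script - a sketch; the function name, wrapper path and mail recipient are assumptions, not part of AIDE:

```shell
#!/bin/sh
# Sketch of a nightly AIDE wrapper: aide --check exits non-zero when the
# filesystem differs from the database, so mail the report only in that case.
run_aide_check() {
    # $1 = path to the aide binary (a parameter so the sketch stays testable)
    out=$("$1" --check 2>&1)
    status=$?
    if [ "$status" -ne 0 ]; then
        # send the full report to root; adjust the recipient to your policy
        printf '%s\n' "$out" | mail -s "AIDE alert on $(hostname)" root
    fi
    return "$status"
}

# In /etc/crontab you would then call the wrapper instead of aide directly:
# 10 2 * * * root /usr/local/sbin/aide-check-wrapper
```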

On the other side, you can verify the integrity of installed packages against your RPM db

# rpm -qVa

Don't panic: the "c" in the second column says that it's a configuration file, and those files are meant to be changed. You can exclude config files by -

# rpm -qVa | awk '$2!="c" {print $0}'
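To see what that filter does, here is the same awk expression run over two canned sample lines (both file names are made up for illustration):

```shell
#!/bin/sh
# Demonstrate the awk filter on fake rpm -qVa output: the "c" in the second
# column marks a config file, and those lines are dropped.
sample='S.5....T.  c /etc/hosts.allow
S.5....T.    /usr/bin/somebinary'

printf '%s\n' "$sample" | awk '$2!="c" {print $0}'
# only the non-config line (/usr/bin/somebinary) survives the filter
```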

Red Hat recommends a matrix of checkpoints; it is always extremely helpful, whether you're hardening your system or just doing the chores. It asks you to define the following access controls and their implementations -

Application, PAM, xinetd, libwrap, SELinux and netfilter

While implementing security with this matrix, remember that every unaccountable resource access is a breach of your security policy.

Let me do a fault analysis with a classic example I faced while working for a client. We kept getting tickets from an application team that their shared user was unable to do passwordless ssh. My team resolved the issue; everything was okay, but only for a short time, and then boom! the problem was back again. It happened several times and drew everyone's attention. Since different members of the Unix team resolved the issue each time, the root cause was not identified on the 2nd or 3rd occurrence.

A user was causing this by playing with the home directory's permissions !! That was identified by 1. characterizing the problem (frequent permission changes), 2. reproducing the problem (by himself, lol) and 3. analysing and gathering further information. In this case the permissions were reset and the user was warned not to do it again; when he repeated it a 3rd time his TL and manager were informed. That finally resolved the issue.

Beyond that, in practice you can gather data with strace (# strace -o outputfile command), you can open a terminal to continuously display your log(s) as they scroll (# tail -f logfile), and you can add a --debug option to the application (along with *.debug in your syslog.conf). Installing the sysstat RPM gives you sar. By default, cron runs sa1 every 10 minutes, passing options to sadc; sa2 runs at night and runs sar, which reads the sadc database. Both the binary db and the human-readable sar reports can be found in /var/log/sa.

Along with that, when you're deploying a new system (or at some point later) you can manage account resource limits with PAM -

/etc/security/access.conf is the configuration file for pam_access.so; it can be used to restrict access to the box to only certain terminals or types of terminal.

/etc/security/time.conf is the configuration file for pam_time.so; it can be used to restrict the time of day at which a user is allowed to access the box or run certain commands.

/etc/security/limits.conf is the configuration file for pam_limits.so; it can be used to limit the number of processes a given user may create, the amount of CPU time a process may consume, the default nice value for a process, and other limits.
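To make the three files concrete, here are hedged sample entries - the group, user and numbers are purely illustrative, and each file's own comments document the exact field syntax:

```
# /etc/security/limits.conf sketch - domain, type, item, value
# (@devteam and the numbers are made up)
@devteam        hard    nproc           150
@devteam        soft    cpu             60

# /etc/security/time.conf sketch - services;ttys;users;times
# (lets user "batch" log in on any tty only between 20:00 and 06:00)
login ; tty* ; batch ; Al2000-0600

# /etc/security/access.conf sketch - permission : users : origins
# (root may log in only from local terminals)
- : root : ALL EXCEPT LOCAL
```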

Do experiments with these things and explore the real power of your box.


Reference: Red Hat and NSA guidelines

Thursday 26 January 2012

Security - Logwatch and Auditing How -To

Remote Logging continued ...

When a message is sent to syslog for logging, it is sent with a facility name (e.g. mail, auth ...) and a priority (e.g. debug, emerg ...).

Each line in syslog's config file is a directive that specifies a facility.priority selector (or a set of them) and where to log it. Example -

kern.*                               /var/log/kern.log

It says to log kernel messages of all priorities to the /var/log/kern.log file.

Guidelines -

1. Store every facility's messages in its own log.
2. Restrict the info stored in /var/log/messages to only the auth and user facilities.
3. Store the messages of all unused facilities in /var/log/unused.log; in future it will tell you if an unwanted service runs on your box.
4. For each logfile referred by syslog.conf and rsyslog.conf - 
    # touch LOGFILE
    # chown root:root LOGFILE
    # chmod 600 LOGFILE
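Putting guidelines 1-3 together, a syslog.conf might contain lines like these - a sketch; the file names and the particular set of "unused" facilities are illustrative, so adjust them to your own policy:

```
# /etc/syslog.conf sketch implementing the guidelines above
auth.*;user.*                         /var/log/messages
mail.*                                /var/log/mail.log
cron.*                                /var/log/cron.log
kern.*                                /var/log/kern.log
# facilities nothing on this box should be using
news.*;uucp.*;local0.*;local1.*       /var/log/unused.log
```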

Configure logwatch on your central log server: edit /etc/logwatch/conf/logwatch.conf and make sure you have the following parameters -

HostLimit = no             [tells logwatch to report on all hosts, not just the one it's running on]
SplitHosts = yes           [separate entries by hostname, which is what we want]
MultiEmail = no            [if yes, each host's info is sent in a separate email]
Service = -zz-disk_space   [don't run -zz-disk_space; you're on the central server, that's why]

Since you're telling your log server's logwatch not to report on free disk space, and it's the only logwatch configured, it is recommended to disable logwatch on the clients [# rm /etc/cron.daily/0logwatch]. In that case one workaround is to configure cron to run df and send the output to syslog, so disk usage still gets reported to the log server. :)
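That client-side workaround can be as small as this - a sketch; the script name, the disk-report tag and the user.info priority are assumptions:

```shell
#!/bin/sh
# Hypothetical /etc/cron.daily/disk-report for each client: push df output
# into syslog (user facility), so the central logwatch still sees disk usage.
df -hP | logger -t disk-report -p user.info
```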

Now you have properly configured logwatch, so let's take a look at auditing.
By default, the audit service audits SELinux AVC denials, system logins, account modifications and authentication events such as sudo.

DoD requirements include auditing of: print, startup and shutdown; the date & time of each event; the user ID that initiated it; the type of event; its success or failure; the origin of the request (e.g. tty6); and, for objects introduced into user space or deleted, the name of the object. They also require at least weekly backup to a different system, new audit logs started daily, an immutable configuration (-e 2 will require a reboot to change any audit rule) and audit data files with 640 or more restrictive permissions.


To ensure all processes can be audited, enable auditing at boot time so that processes started before auditd carry an "auditable" flag. Do this -

kernel    /vmlinuz-version ro vga=ext root=/dev/vgx/lvy rhgb quiet audit=1

You know where to add this, if you don't, then ask me. :)

By default auditd retains 4 log files of 5 MB apiece. That's a problem, so configure auditd to log all the events of your interest, then monitor the log files to see what file size would be required to hold the data for the period you want. Following best practice, put /var/log/audit on a dedicated partition.

In /etc/audit/auditd.conf, determine STOREMB -

max_log_file = STOREMB

How do you calculate the size of the dedicated partition? Simple: it should be larger than max_log_file x num_logs.
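As a quick worked example (the numbers are illustrative, not recommendations):

```shell
#!/bin/sh
# Back-of-the-envelope sizing for the /var/log/audit partition.
max_log_file=5        # MB per log, from /etc/audit/auditd.conf
num_logs=6            # from /etc/audit/auditd.conf
required_mb=$(( max_log_file * num_logs ))
echo "audit partition needs more than ${required_mb} MB, plus headroom"
```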

Other parameters to take care of are

space_left_action = email
action_mail_acct = root
admin_space_left_action = halt
and
max_log_file_action = keep_logs

Configure that last parameter as stated if you don't want auditd to discard the oldest log when the logs reach their maximum size.

Now it's time to configure the auditd rules. We can do that by copying a prototype into place as audit.rules -

# cp /usr/share/doc/audit-version/stig.rules /etc/audit/audit.rules

Now open audit.rules and comment out the arch= lines that don't match your architecture.

Add the following lines to record events that modify date and time information, setting ARCH to b32 or b64 as per your system -

-a always,exit -F arch=ARCH -S adjtimex -S settimeofday -S stime -k time-change
-a always,exit -F arch=ARCH -S clock_settime -k time-change
-w /etc/localtime -p wa -k time-change

Now record events that modify user/group information -

-w /etc/group -p wa -k identity
-w /etc/passwd -p wa -k identity
-w /etc/gshadow -p wa -k identity
-w /etc/shadow -p wa -k identity
-w /etc/security/opasswd -p wa -k identity

events that modify network environment -

-a always,exit -F arch=ARCH -S sethostname -S setdomainname -k system-locale
-w /etc/issue -p wa -k system-locale
-w /etc/issue.net -p wa -k system-locale
-w /etc/hosts -p wa -k system-locale
-w /etc/sysconfig/network -p wa -k system-locale

Events that modify the system's MAC policy -

-w /etc/selinux/ -p wa -k MAC-policy

Events that modify logon and logout information -

-w /var/log/faillog -p wa -k logins
-w /var/log/lastlog -p wa -k logins

events that alter Process and Session initiation info -

-w /var/run/utmp -p wa -k session
-w /var/log/btmp -p wa -k session
-w /var/log/wtmp -p wa -k session

Did we forget to record DAC events? Okay, let's capture them too -

-a always,exit -F arch=ARCH -S chmod -S fchmod -S fchmodat -F auid>=500 -F auid!=4294967295 -k perm_mod
-a always,exit -F arch=ARCH -S chown -S fchown -S fchownat -S lchown -F auid>=500 -F auid!=4294967295 -k perm_mod
-a always,exit -F arch=ARCH -S setxattr -S lsetxattr -S fsetxattr -S removexattr -S lremovexattr -S fremovexattr -F auid>=500 -F auid!=4294967295 -k perm_mod

Unsuccessful events / unauthorized access attempts to files -

-a always,exit -F arch=ARCH -S creat -S open -S openat -S truncate -S ftruncate -F exit=-EACCES -F auid>=500 -F auid!=4294967295 -k access
-a always,exit -F arch=ARCH -S creat -S open -S openat -S truncate -S ftruncate -F exit=-EPERM -F auid>=500 -F auid!=4294967295 -k access

Events of privileged command use -

Run the following command for each local partition ABC to generate the rules, one per setuid or setgid program, and then add those lines to audit.rules -

# find ABC -xdev \( -perm -4000 -o -perm -2000 \) -type f | awk '{print "-a always,exit -F path=" $1 " -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged" }'


Media exportation events -

-a always,exit -F arch=ARCH -S mount -F auid>=500 -F auid!=4294967295 -k export

File deletion events by users -

-a always,exit -F arch=ARCH -S unlink -S unlinkat -S rename -S renameat -F auid>=500 -F auid!=4294967295 -k delete

Sysadmin actions & kernel module loading / unloading -

-w /etc/sudoers -p wa -k actions
-w /sbin/insmod -p x -k modules
-w /sbin/rmmod -p x -k modules
-w /sbin/modprobe -p x -k modules
-a always,exit -F arch=ARCH -S init_module -S delete_module -k modules

It's done. Now spend some time with the aureport(8) man page so that you can design a short series of audit-reporting commands to run against the audit logs at a frequency of your choosing.

To generate a daily report of every user login -

# aureport -l -i -ts yesterday -te today

Review all audited activity for unusual behaviour by looking at a summary of which audit rules are being triggered -

# aureport --key --summary

To make access violations stand out -

# ausearch --key access --raw | aureport --file --summary

To see which executables are involved -

# ausearch --key access --raw | aureport -x --summary

To determine who is behind the access violations on a particular file (here /etc/shadow) -

# ausearch --key access --file /etc/shadow --raw | aureport --user --summary -i

Devices switching to promiscuous mode, processes ending abnormally, login-failure limits being reached and other anomalous activity -


# aureport --anomaly

... that's enough on logwatch and auditing. Configure it and stay tuned to my blog.

Reference: NSA guidelines

Visit my other blog space here - http://baaharkibaat.blogspot.com

Thursday 19 January 2012

Security - Remote Logging

... Ok, today we're going to set up a remote log server.

ssh to your logserver and edit /etc/sysconfig/syslog

It's a well-commented file; go to the line

SYSLOGD_OPTIONS="-m 0"

Now add the -r option so that it can accept remote messages as well.

SYSLOGD_OPTIONS="-r -m 0"

Restart your syslog daemon

# service syslog restart 

Okay, so you have done it; your syslog can listen for remote messages, but is your firewall aware of that? If not, let's tell the firewall to permit incoming syslog messages -

# iptables -I INPUT -p udp --dport 514 --source a.b.c.0/24 -j ACCEPT

(replace a.b.c.0/24 with the network your clients send from; --source needs an address or network to be valid)

That's it on the log server; now go to the other box, which will send its syslog messages to this log server.

On the other box, edit /etc/syslog.conf and add facility.priority entries according to your requirements. For example, I am sending all user messages to our log server -

user.* @a.b.c.d 

where a.b.c.d is your log server's IP.

On SELinux-enabled systems don't forget to restorecon all the files you edit.
Now do

# service syslog restart

and test your new configuration by generating a message -

# logger -i -t testlog "I am testing remote logging" 

This message will appear in the log server's /var/log/messages in the following format -
Jan 19 17:55:15 a.b.c.d testlog[6789]: I am testing remote logging

If you're still wondering why we need a central remote logging server on our network, let me tell you that almost all successful local or network attacks have one feature in common: the attacker always tries to erase his footprints, to clear the evidence of his work. Your log server is a configuration that collects evidence; in addition, well-configured auditing along with logging will show you misconfigurations and vulnerabilities.



We'll discuss response strategies and fault analysis along with logwatch in next blog.
 

Visit my other blog space here - http://baaharkibaat.blogspot.com

Tuesday 17 January 2012

Infrastructure, Security and You

Infrastructure is comprised of roles, roles and roles. How? Let's see -
for computing infra - systems
for system infra - processes
for process infra - accounts

These roles either serve or request. For example, accounts can serve or request in the processing infrastructure.

Remember, there is nothing personal about a node / system / workstation / server; it is the data that is personal, not the system. That is why we always deploy a policy to secure our environment and infrastructure.


There is a need for a policy. It does not matter whether it's complete or not, or whether it's just one paragraph or a single page, but you should have a policy that guides you on security, because once you start writing a policy you get a vision, and with time your policy comes close to leaving no room for error.


Ok, let's begin and we'll try to see the Security in theory / principle and in practice.


In principle security domains can be -


Physical
Local
Remote
Personnel


Other than that, if you read books written by geeks, or read the theory, you'll find life cycles of security domains and stages, but here I'll try to stay glued to the OS's point of view. So we can say that if you keep your installer selection at the default, you're establishing a known state of the physical domain; if your system's initialization in the ready state stays close to the default state, that is an approach to establishing a known local domain. You can declare a hardening policy that defines this and guides you in new deployments.


Similarly, keeping the networking configuration narrow and precise is the approach to establishing a known remote domain.


In practice?


Your design makes the system serve available resources.
Your policy helps you enforce that the system preserves available resources.
& 4 Qs -
1. Do we need to host this?
2. Does this node or CI need to know and access this?
3. Is the system's behaviour normal?
4. Have you applied security updates?


The 3rd question of the 4 Qs needs proactive monitoring of resources for performance. Once you build up a pattern of performance, you can easily spot not only drops in performance but vulnerabilities and compromises too. Always use sar and logwatch, and configure your syslog to send every priority above info to a remote host for your entire infrastructure.
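In syslog.conf terms, "every priority above info" means notice and up, so a single line covers it (a.b.c.d is a placeholder for your central log host):

```
# syslog.conf sketch - everything at notice or above goes to the log host
*.notice                              @a.b.c.d
```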


In a linux system you'll have following facilities -

authpriv
cron
daemon
kern
local [0-7]
lpr
mail
news
syslog
user

and you'll have following priorities for the above facilities -

debug
info
notice
warning
err
crit
alert
emerg

Analyse your requirements, check your policies and decide which facilities and priorities you are going to log on the remote log server.
Continued in the next blog - how to accomplish this ^, and how to decide upon response strategies and fault analysis.