Part 1: Apache log analysis
1. Count the running Apache (httpd) processes:
ps aux | grep httpd | grep -v grep | wc -l
2. Count established TCP connections on port 80:
netstat -tan | grep "ESTABLISHED" | grep ":80" | wc -l
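If netstat is unavailable (it is deprecated on newer distributions), ss can produce the same count; a minimal sketch, assuming an iproute2 recent enough to support -H for suppressing the header line:

ss -Htn state established '( sport = :80 )' | wc -l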
3. Count the day's connections per IP from the log, deduplicated:
cat access_log | grep "20/Oct/2008" | awk '{print $2}' | sort | uniq -c | sort -nr
4. See what the day's busiest IP was doing (it turned out to be a spider):
cat access_log | grep "20/Oct/2008:00" | grep "122.102.7.212" | awk '{print $8}' | sort | uniq -c | sort -nr | head -n 10
5. Top 10 requested URLs for the day:
cat access_log | grep "20/Oct/2008:00" | awk '{print $8}' | sort | uniq -c | sort -nr | head -n 10
6. Sniff port 80 with tcpdump to see which IP sends the most requests:
tcpdump -i eth0 -tnn dst port 80 -c 1000 | awk -F"." '{print $1"."$2"."$3"."$4}' | sort | uniq -c | sort -nr
Then check the log to see what that IP was up to:
cat access_log | grep 122.102.7.212 | awk '{print $1"\t"$8}' | sort | uniq -c | sort -nr | less
7. Count distinct IPs connecting in a given time window:
grep "2006:0[7-8]" www20060723.log | awk '{print $2}' | sort | uniq -c| sort -nr | wc -l
Part 2: Nginx log analysis
The Nginx log format:
log_format main '[$time_local] $remote_addr $status $request_time $body_bytes_sent "$request" "$http_referer"';
access_log /data0/logs/access.log main;
A sample log line:
[21/Mar/2011:11:52:15 +0800] 58.60.188.61 200 0.265 28 "POST /event/time HTTP/1.1" "http://host/loupan/207846/feature"
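The commands below pull fields out of this line with awk, so it helps to see how awk's default whitespace splitting numbers them; a quick sketch based on the sample line above (this is why $3 yields the client IP and $8 the request path):

# $1 = [21/Mar/2011:11:52:15   $2 = +0800]   $3 = client IP   $4 = status
# $5 = request_time   $6 = body_bytes_sent   $7 = "POST   $8 = URI   $9 = HTTP/1.1"   $10 = referer
awk '{print $3, $4, $8}' access.log | head   # client IP, status code, request path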
Count the day's connections per IP from the log, deduplicated:
cat access.log | grep "20/Mar/2011" | awk '{print $3}' | sort | uniq -c | sort -nr
38 112.97.192.16
20 117.136.31.145
19 112.97.192.31
3 61.156.31.20
2 209.213.40.6
1 222.76.85.28
Top 10 requested URLs for the day:
cat access.log | grep "20/Mar/2011" | awk '{print $8}' | sort | uniq -c | sort -nr | head -n 10
Find the 10 IPs with the most requests:
awk '{print $3}' access.log | sort | uniq -c | sort -nr | head
10680 10.0.21.17
1702 10.0.20.167
823 10.0.20.51
504 10.0.20.255
215 58.60.188.61
192 183.17.161.216
38 112.97.192.16
20 117.136.31.145
19 112.97.192.31
6 113.106.88.10
Find the 10 IPs with the most requests on a given day:
cat /tmp/access.log | grep "20/Mar/2011" | awk '{print $3}' | sort | uniq -c | sort -nr | head
38 112.97.192.16
20 117.136.31.145
19 112.97.192.31
3 61.156.31.20
2 209.213.40.6
1 222.76.85.28
See what the day's busiest IP was requesting:
cat access.log | grep "10.0.21.17" | awk '{print $8}' | sort | uniq -c | sort -nr | head -n 10
224 /test/themes/default/img/logo_index.gif
224 /test/themes/default/img/bg_index_head.jpg
224 /test/themes/default/img/bg_index.gif
219 /test/vc.php
219 /
213 /misc/js/global.js
211 /misc/jsext/popup.ext.js
211 /misc/js/common.js
210 /sladmin/home
197 /misc/js/flib.js
Find the minutes with the most requests:
awk '{print $1}' access.log | grep "20/Mar/2011" | cut -c 14-18 | sort | uniq -c | sort -nr | head
24 16:49
19 16:17
16 16:51
11 16:48
4 16:50
3 16:52
1 20:09
1 20:05
1 20:03
1 19:55
Part 3: Nginx log scripts on Linux
This part covers an Nginx log rotation script, an Nginx log analysis script, and related pieces.
1. Cron job:
crontab -l
1 15 * * * /home/dongnan/sh/split.sh >> /home/dongnan/sh/cron.log 2>&1
2. The Nginx logs:
ls /var/log/nginx/
20130730-access.log.gz 20130801-access.log.gz 20130803-access.log.gz
20130730-error.log.gz 20130801-error.log.gz 20130803-error.log.gz
20130731-access.log.gz 20130802-access.log.gz access.log
20130731-error.log.gz 20130802-error.log.gz error.log
3. The shell script:
cat split.sh
#!/bin/bash
#script_name:nginx_log.sh
#description:nginx-log deleted/rotate/compress
#last_update:20130725 by zongming
#Nginx
#Signal Action
#TERM, INT Terminate the server immediately
#QUIT Stop the server
#HUP Configuration changes, start new workers, graceful stop of old workers
#USR1 Reopen log files
#USR2 Upgrade the server executable
#WINCH Graceful Stop (parent process advise the children to exit)
#variables
log_dir=/var/log/nginx/
log_date=$(date +"%Y%m%d")
nginx_pid=/var/run/nginx.pid
keep_days=30
#old_log
find "$log_dir" -name "*.log.gz" -type f -mtime +"${keep_days}" -exec rm -rf {} ;
#rename_log
for log_name in `ls "$log_dir" | awk '/.log$/'`; do
    if [ -e "${log_dir}${log_date}-${log_name}" ]; then
        echo "${log_dir}${log_date}-${log_name} Already exists" && continue
    else
        /bin/mv "${log_dir}${log_name}" "${log_dir}${log_date}-${log_name}"
        /bin/gzip "${log_dir}${log_date}-${log_name}"
    fi
done
#new_log
/bin/kill -USR1 $(cat $nginx_pid) && /bin/sleep 1
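Before letting cron drive it, the script can be exercised once by hand to confirm the rotation behaves (paths as in the listings above):

bash -x /home/dongnan/sh/split.sh   # -x traces each command as it executes
ls /var/log/nginx/                  # the live logs should now appear as <YYYYMMDD>-access.log.gz etc.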
II. Nginx log rotation script:
vi /root/cutlog.sh
#!/bin/bash
I=`ps aux | grep nginx | grep root | grep -v 'grep nginx' | awk '{print $14}'` # find the nginx process
if [ "$I" == "/usr/local/nginx/sbin/nginx" ]; then
    ACCLOG=`cat /usr/local/nginx/conf/nginx.conf | grep ' access_log' | awk '{print $2}'` # nginx is running, so read the access log path from the config file
    ERRLOG=`cat /usr/local/nginx/conf/nginx.conf | grep ^error | awk '{print $2}' | cut -d";" -f1` # error log path
    ls $ACCLOG # check that the file exists
    if [ $? -eq 0 ]; then # if it does
        mv $ACCLOG $ACCLOG.`date -d "-1 day" +%F` # rename the current logs
        mv $ERRLOG $ERRLOG.`date -d "-1 day" +%F`
        touch $ACCLOG # create fresh, empty logs
        touch $ERRLOG
        chown nginx:root $ACCLOG # fix ownership
        chown nginx:root $ERRLOG
        [ -f /usr/local/nginx/logs/nginx.pid ] && kill -USR1 `cat /usr/local/nginx/logs/nginx.pid` # if nginx is running, send USR1 so it reopens its logs and writes new entries to the files just created
        /mnt/logs/checklog.sh $ACCLOG.`date -d "-1 day" +%F` # the log analysis script (below)
        gzip $ACCLOG.`date -d "-1 day" +%F` # compress the rotated logs
        gzip $ERRLOG.`date -d "-1 day" +%F`
        mv $ACCLOG.`date -d "-10 day" +%F`.* /mnt/history.nginx.log/ # move 10-day-old logs elsewhere (change this to a delete if you prefer)
        mv $ERRLOG.`date -d "-10 day" +%F`.* /mnt/history.nginx.log/
    fi
fi
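Like split.sh earlier, this script is meant to be driven by cron; a hypothetical crontab entry that rotates the logs just before midnight:

59 23 * * * /root/cutlog.sh >/dev/null 2>&1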
III. Nginx log analysis script:
vi /mnt/logs/checklog.sh
#!/bin/bash
echo -e "####################`date +%F`" >> /mnt/logs/400.txt
echo -e "####################`date +%F`" >> /mnt/logs/URL.txt
echo -e "####################`date +%F`" >> /mnt/logs/IP.txt
cat $1 | wc -l >> /mnt/logs/IP.txt # total request count, for the IP report
cat $1 | awk -F'"' '{print $3}' | awk '{print $1}' | sort | uniq -c | sort -rn > /mnt/logs/CODE.txt # tally response status codes
cat $1 | awk '{print $1}' | sort | uniq -c | sort -rn | head -n20 >> /mnt/logs/IP.txt # top 20 client IPs
N=`cat /mnt/logs/CODE.txt | wc -l`
for I in $(seq 1 $N)
do
    M=`head -n$I /mnt/logs/CODE.txt | tail -n1 | awk '{print $2}'`
    if [ $M -ge 400 ]
    then
        echo "#####FIND $M###############" >> /mnt/logs/400.txt # break down error responses
        cat $1 | grep "\" $M " | grep -v ' "-" "-" - ' | sort | awk '{print $1 $2 $3 $6 $7 $8 $9 $10 $11 $12 $13 $14 $15 $16 $17 $18 $19 $20 $21}' | sort | uniq -c | sort -rn | head -n5 >> /mnt/logs/400.txt
    fi
done
cat $1 | grep -v ' "-" "-" - ' | awk -F'T' '{print $2}' | awk -F'?' '{print $1}' | sort | awk '{print $1}' |
    sed 's/\(\/review\/file\/download\/\).*/\1/g' | sort | uniq -c | sort -rn | head -n20 >> /mnt/logs/URL.txt # top 20 requested URLs
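checklog.sh takes the rotated access log as its only argument ($1). cutlog.sh above invokes it automatically, but it can also be run by hand against any log file (the file name here is only an example):

/mnt/logs/checklog.sh /var/log/nginx/access.log.2011-03-20
cat /mnt/logs/IP.txt    # total requests plus the top-20 client IPs
cat /mnt/logs/400.txt   # breakdown of 4xx/5xx responses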