Jul 06, 2018
 

The following SQL updates a WordPress database after a domain change from http://www.zwl520.com to http://red.zwl520.com:

UPDATE wp_options SET option_value = replace( option_value, 'http://www.zwl520.com', 'http://red.zwl520.com' ) WHERE option_name = 'home' OR option_name = 'siteurl';

UPDATE wp_posts SET post_content = replace( post_content, 'http://www.zwl520.com', 'http://red.zwl520.com' ) ;

UPDATE wp_posts SET guid = replace( guid, 'http://www.zwl520.com', 'http://red.zwl520.com' ) ;

Jun 29, 2018
 

Logstash's logstash-filter-useragent plugin parses user agent strings into browser name and version, device model, and operating system version.

Wrap it in an if statement so the plugin only runs when the http_user_agent field is not "-".

target puts the parsed user agent information into a separate field.

source is a required setting and must point to the field that contains the user agent string.

The configuration looks like this:

if [http_user_agent] != "-" {
  useragent {
    source => "http_user_agent"
    target => "ua"
  }
}

Sample output:

"ua" => {
          "build" => "",
        "os_name" => "Other",
           "name" => "curl",
         "device" => "Other",
          "patch" => "0",
          "minor" => "29",
          "major" => "7",
             "os" => "Other"
    },

Full configuration:

input {
  file {
    path => "/var/log/nginx/access.log"
    type => "nginx"
    start_position => "beginning"
    codec => "json"
  }
}
filter {
  json {
    source => "message"
    remove_field => ["message"]
  }
  geoip {
    source => "remote_addr"
    target => "geoip"
    database => "/opt/GeoLite2-City.mmdb"
    add_field => ["[geoip][coordinates]", "%{[geoip][longitude]}"]
    add_field => ["[geoip][coordinates]", "%{[geoip][latitude]}"]
  }
  if [http_user_agent] != "-" {
    useragent {
      source => "http_user_agent"
      target => "ua"
    }
  }
  mutate {
    convert => ["[geoip][coordinates]", "float"]
  }
}
output {
  elasticsearch {
    hosts => ["192.168.30.208:9200"]
    index => "logstash-nginx-access-%{+YYYY-MM-dd}"
  }
}
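Before starting the pipeline you can ask Logstash to validate the file first (the config path is an assumption; use wherever you saved the configuration above):

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit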

Note

To use the new fields in Kibana you must refresh the index pattern's field list first; otherwise the fields can only be viewed in Discover, not used in visualizations.

Jun 29, 2018
 

Nginx log format

The format is as follows:

log_format json '{"timestamp":"$time_iso8601",'
         '"server_hostname":"$hostname",'
         '"serverip":"$server_addr",'
         '"clientip":"$remote_addr",'
         '"x_real_ip":"$http_x_forwarded_for",'
         '"user_name":"$remote_user",'
         '"body_bytes_sent":"$body_bytes_sent",'
         '"bytes_sent":"$bytes_sent",'
         '"connection":"$connection",'
         '"connection_requests":"$connection_requests",'
         '"request_uri":"$scheme://$http_host$request_uri",'
         '"http_referer":"$http_referer",'
         '"request_time":"$request_time",'
         '"upstream_response_time":"$upstream_response_time",'
         '"upstream_addr":"$upstream_addr",'
         '"upstream_cache_status":"$upstream_cache_status",'
         '"http_user_agent":"$http_user_agent",'
         '"status": "$status",'
         '"request_method": "$request_method",'
         '"requesturl": "$request_uri"}';

Or a simpler format:

log_format json '{ "@timestamp": "$time_iso8601", '
         '"remote_addr": "$remote_addr", '
         '"remote_user": "$remote_user", '
         '"body_bytes_sent": "$body_bytes_sent", '
         '"request_time": "$request_time", '
         '"status": "$status", '
         '"request_uri": "$request_uri", '
         '"request_method": "$request_method", '
         '"http_referrer": "$http_referer", '
         '"http_x_forwarded_for": "$http_x_forwarded_for", '
         '"http_user_agent": "$http_user_agent"}';

Note:

Whenever you change the log fields, you must refresh the index fields in Kibana before the new fields can be used.

Kibana-related configuration

UV count: the deduplicated geoip.ip count. Not very accurate, because several internal hosts can sit behind a single public IP, which is then counted only once.

PV count: the total number of requests. (A sketch of both metrics as an Elasticsearch query follows.)
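As an illustration (a sketch; host, index name, and field names are taken from the Logstash config in the previous post), UV maps to a cardinality aggregation on the client IP and PV to the plain hit count:

curl -XGET 'http://192.168.30.208:9200/logstash-nginx-access-*/_search?pretty' -H 'Content-Type: application/json' -d '{
  "size": 0,
  "aggs": {
    "uv": { "cardinality": { "field": "geoip.ip" } }
  }
}'

In the response, hits.total is the PV and aggregations.uv.value the deduplicated UV.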

Visits per host

Top 10 client IPs, including city

Traffic trend

Geographic distribution of visits

Top 10 pages and their hit counts

IP traffic trend over a given time range

Response code distribution

Browser statistics

Device statistics

Operating system statistics

Jun 29, 2018
 

IP geolocation is used to determine the physical location of an IP address. Since ELK already collects the web logs, analyzing the geographic distribution of visitor source addresses with ELK is very helpful for adjusting how a site is run.

The plugin ships with the free GeoLite2 City database (https://dev.maxmind.com/geoip/geoip2/geolite2/). In Maxmind's own words, "the GeoLite2 databases are free IP geolocation databases comparable to, but less accurate than, MaxMind's GeoIP2 databases". See the GeoLite2 license for details. Maxmind's commercial databases (https://www.maxmind.com/en/geoip2-databases) are also supported by this plugin. In short, there are two databases, one free and one commercial, and the free one is less accurate.

If you need a database other than the bundled GeoLite2 City, you can download one directly from the Maxmind site and point to it with the database option. The GeoLite2 databases can be downloaded here (https://dev.maxmind.com/geoip/geoip2/geolite2/).

If you want autonomous system number (ASN) information, use the GeoLite2-ASN database.

If the GeoIP lookup returns a longitude and latitude, a [geoip][location] field is created, stored in GeoJSON format (http://geojson.org/geojson-spec.html). In addition, the default Elasticsearch template provided by the elasticsearch output (https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html) maps the [geoip][location] field to an Elasticsearch geo_point.

Because this field is a geo_point and still valid GeoJSON, you get Elasticsearch's geospatial queries, facets, and filters, plus the flexibility of GeoJSON for all other applications, such as Kibana's map visualizations.
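For illustration, the relevant part of that default mapping looks roughly like this (a minimal sketch, not the full template):

"geoip": {
  "properties": {
    "location": { "type": "geo_point" }
  }
}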

This product includes GeoLite2 data created by MaxMind, available from http://www.maxmind.com. The database is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (https://creativecommons.org/licenses/by-sa/4.0/).

Versions 4.0.0 and later of the GeoIP filter use the MaxMind GeoLite2 database and support both IPv4 and IPv6 lookups. Versions before 4.0.0 use the legacy MaxMind GeoLite database and support IPv4 only.

geoip configuration

Nginx logs

The Nginx JSON log format is as follows:

log_format json '{ "@timestamp": "$time_iso8601", '
         '"remote_addr": "$remote_addr", '
         '"remote_user": "$remote_user", '
         '"body_bytes_sent": "$body_bytes_sent", '
         '"request_time": "$request_time", '
         '"status": "$status", '
         '"request_uri": "$request_uri", '
         '"request_method": "$request_method", '
         '"http_referrer": "$http_referer", '
         '"http_x_forwarded_for": "$http_x_forwarded_for", '
         '"http_user_agent": "$http_user_agent"}';

Download the GeoLite2-City database

wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz
tar xf GeoLite2-City.tar.gz
cp GeoLite2-City_20180605/GeoLite2-City.mmdb /opt/

Logstash configuration file

input {
  file {
    path => "/var/log/nginx/access.log"
    type => "nginx"
    start_position => "beginning"
    codec => "json"
  }
}
filter {
      json {
          source => "message"           
          remove_field => ["message"]   
      }
      geoip {
          source => "remote_addr"          #源是定义的nginx json格式日志中访问IP的字段
          target => "geoip"                #生成一个新的字段来保存geoip的字段
          database => "/opt/GeoLite2-City.mmdb"     #geoip数据库路径
          add_field => ["[geoip][coordinates]", "%{[geoip][longitude]}"]
          add_field => ["[geoip][coordinates]", "%{[geoip][latitude]}"]
      }
      mutate {
          convert => ["[geoip][coordinates]", "float"]
      }
}
output {
    stdout {
        codec    => rubydebug
    }
}

Test the output

/usr/share/logstash/bin/logstash -f /opt/1.conf

Output:

{
          "request_method" => "GET",
                "@version" => "1",
    "http_x_forwarded_for" => "-",
                   "geoip" => {                #这字段为logstash中定义的tags, 下面为geoip生成的字段,包括城市,国家代码等等
         "country_code3" => "CN",
              "latitude" => 22.5333,
         "country_code2" => "CN",
           "coordinates" => [
            [0] 114.1333,
            [1] 22.5333
        ],
             "city_name" => "Shenzhen",
           "region_name" => "Guangdong",
             "longitude" => 114.1333,
              "timezone" => "Asia/Shanghai",
                    "ip" => "113.89.4.236",
          "country_name" => "China",
        "continent_code" => "AS",
           "region_code" => "GD",
              "location" => {
            "lon" => 114.1333,
            "lat" => 22.5333
        }
    },
              "@timestamp" => 2018-06-29T05:46:45.000Z,
         "http_user_agent" => "curl/7.29.0",
                  "status" => "200",
             "remote_addr" => "113.89.4.236",
             "remote_user" => "-",
             "request_uri" => "/",
           "http_referrer" => "-",
                    "host" => "localhost.localdomain",
         "body_bytes_sent" => "3700",
                    "type" => "nginx",
            "request_time" => "0.000",
                    "path" => "/var/log/nginx/access.log"
}

For private (internal) addresses the geoip lookup fails, as shown below:

{
          "request_method" => "GET",
                "@version" => "1",
    "http_x_forwarded_for" => "-",
                   "geoip" => {},
              "@timestamp" => 2018-06-29T05:46:45.000Z,
         "http_user_agent" => "curl/7.29.0",
                  "status" => "200",
             "remote_addr" => "192.168.30.208",
             "remote_user" => "-",
             "request_uri" => "/",
           "http_referrer" => "-",
                    "host" => "localhost.localdomain",
         "body_bytes_sent" => "3700",
                    "type" => "nginx",
            "request_time" => "0.000",
                    "path" => "/var/log/nginx/access.log",
                    "tags" => [
        [0] "_geoip_lookup_failure"                   #geoip无法解析内网地址   
    ]
}

If you don't want to output events with unresolvable addresses, add "if "_geoip_lookup_failure" in [tags] { drop { } }" to the filter, as shown below.

This means: if the event carries the _geoip_lookup_failure tag, drop it. Private addresses are then filtered out completely, and those records are never written to the index.

filter {
  json {
    source => "message"
    remove_field => ["message"]
  }
  geoip {
    source => "remote_addr"
    target => "geoip"
    database => "/opt/GeoLite2-City.mmdb"
    add_field => ["[geoip][coordinates]", "%{[geoip][longitude]}"]
    add_field => ["[geoip][coordinates]", "%{[geoip][latitude]}"]
  }
  if "_geoip_lookup_failure" in [tags] { drop { } }
  mutate {
    convert => ["[geoip][coordinates]", "float"]
  }
}

Configure Kibana

First refresh the field list under index management. Testing shows that fields added later must be refreshed, otherwise they cannot be used: they show up in Discover but are unusable in Visualize!

Create a visualization

Configured as follows:

Note:

If you get the error

"No Compatible Fields: The "[nginx-access-]YYYY-MM" index pattern does not contain any of the following field types: geo_point"

Log files with the index pattern [nginx-access-]YYYY-MM are written to Elasticsearch by Logstash. In Elasticsearch every field has a type, and certain operations are only available for fields of the matching type. The location field inside the geo data holds longitude and latitude, which we need to place points on a map; for Elasticsearch's geolocation features to work, the field must be mapped as geo_point. This error simply means our geo location field is not mapped as geo_point.

Solution: Elasticsearch supports predefining settings and mappings for indices via templates (provided your Elasticsearch version supports this API, which virtually all do). Elasticsearch already ships with a default predefined template, and we can simply rely on it; but this default template is only applied to indices whose names match logstash-*, so the index name must be changed to start with logstash, as follows:

output {
  elasticsearch {
    hosts => ["192.168.30.208:9200"]
    index => "logstash-nginx-access-%{+YYYY-MM-dd}"
  }
}
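Alternatively (a sketch and an assumption, not part of the original setup), you could keep a custom index name by registering your own template that maps [geoip][location] to geo_point for your pattern; "doc" is assumed as the document type used by Logstash 6:

curl -XPUT 'http://192.168.30.208:9200/_template/nginx-access' -H 'Content-Type: application/json' -d '{
  "index_patterns": ["nginx-access-*"],
  "mappings": {
    "doc": {
      "properties": {
        "geoip": { "properties": { "location": { "type": "geo_point" } } }
      }
    }
  }
}'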

Use AMap tiles for a Chinese-language map

Edit the config file vim /etc/kibana/kibana.yml and add the following tile URL:

tilemap.url: 'http://webrd02.is.autonavi.com/appmaptile?lang=zh_cn&size=1&scale=1&style=7&x={x}&y={y}&z={z}'

Restart Kibana.

Edit the visualization.

Jun 20, 2018
 

Preface

X-Pack is an Elasticsearch extension that bundles security, alerting, monitoring, graph, and reporting features into a single easy-to-install package. Although X-Pack is designed to work as a seamless whole, individual features can easily be enabled or disabled. As of 6.2 and earlier only the free tier is available, and it offers very few features. However, the license check has already been cracked by others; this post merely stands on the shoulders of giants. The basic idea of the crack is to install the normal version first and then swap in a patched jar. Only the Platinum tier can be unlocked this way, but that is plenty.

X-Pack installation

This continues from the ELK installation in the previous post:

Quick ELK install on Alibaba Cloud CentOS 7.4

Install X-Pack

There are two installation methods.

Method 1: online install (because of network issues, the local install is used here):

/usr/share/elasticsearch/bin/elasticsearch-plugin install x-pack
/usr/share/kibana/bin/kibana-plugin install x-pack

Method 2: local install

Download the package matching your version and install the unzip tools:

wget https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-6.2.4.zip

yum install -y zip unzip

Install X-Pack into Elasticsearch:

/usr/share/elasticsearch/bin/elasticsearch-plugin install file:///root/x-pack-6.2.4.zip

Install X-Pack into Kibana (this takes quite a while); the local install mirrors the Elasticsearch one:

/usr/share/kibana/bin/kibana-plugin install file:///root/x-pack-6.2.4.zip

Once installation finishes, don't rush to open the UI; reset the passwords first:

/usr/share/elasticsearch/bin/x-pack/setup-passwords interactive

Set all three passwords to Test1234@.

Configure Elasticsearch

vim /etc/elasticsearch/elasticsearch.yml

Add the following to enable the security options:
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.monitoring.enabled: false

Restart Elasticsearch.

Configure Kibana

vim /etc/kibana/kibana.yml

Add the Elasticsearch credentials, otherwise Kibana cannot connect to Elasticsearch and throws errors:
elasticsearch.username: "elastic"
elasticsearch.password: "Test1234@"

Restart Kibana.

Refresh the page. Three user passwords were just reset, but only elastic is the superuser; log in with the elastic account.

Access Elasticsearch

Since credentials are now required, pass them on the command line:

curl -uelastic:Test1234@ 172.18.241.4:9200

head must be accessed as follows:

http://120.77.40.35:9100/?base_uri=http://120.77.40.35:9200&auth_user=elastic&auth_password=Test1234@

Logstash configuration

The elasticsearch output also needs the credentials:

input {
  file {
    path => "/tmp/access.log"
    type => "nginx"
    start_position => "beginning"
    stat_interval => "2"
  }
}

output {
  elasticsearch {
    hosts => ["172.18.241.4:9200"]
    index => "logstash-nginx-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "Test1234@"
  }

}

Cracking X-Pack

X-Pack used to come with a one-year free license; the latest version only offers a 30-day trial. The crack is fairly involved, so it is left for a later post.

Jun 20, 2018
 

Environment

Alibaba Cloud VPC, CentOS Linux release 7.4.1708 (Core)

ELK 6.2.4

Downloads

ELK      https://www.elastic.co/cn/products

Installation

All components are installed from rpm packages.

Install the JDK:
rpm -ivh jdk-8u171-linux-x64.rpm

Install ELK:
rpm -ivh elasticsearch-6.2.4.rpm
rpm -ivh logstash-6.2.4.rpm
rpm -ivh kibana-6.2.4-x86_64.rpm

Configure Elasticsearch

Adjust the system limits:
vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 51200
* hard nproc 51200
Edit the Elasticsearch config file:
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: ELKtest          # cluster name
node.name: node-1              # node name
path.data: /data/es_data       # data directory
path.logs: /data/es_log        # log directory
network.host: 172.18.241.4     # listen address
http.port: 9200                # listen port
Create the directories and set ownership:
mkdir -p /data/es_data
mkdir -p /data/es_log
chown -R elasticsearch.elasticsearch /data/
Start and test:
systemctl restart elasticsearch.service

Install the head plugin

ElasticSearch-Head is a web front end for interacting with an Elastic cluster.
It displays the cluster topology and supports index- and node-level operations.
It offers a set of query APIs against the cluster and returns results as JSON and tables.
It provides shortcut menus showing various aspects of cluster state.

From 5.x on, installing the head plugin is more of a hassle; it can no longer be done with a single command such as 2.x's elasticsearch/bin/plugin install mobz/elasticsearch-head.

Install Node.js

Download the source package:
wget https://nodejs.org/dist/v8.9.1/node-v8.9.1.tar.gz
Build and install:
tar xf node-v8.9.1.tar.gz
cd node-v8.9.1/
./configure --prefix=/usr/local/node-8.9.1
make -j 8
make install
ln -s /usr/local/node-8.9.1/ /usr/local/node
Set the environment variables:
vim /etc/profile
export NODE_HOME=/usr/local/node
export PATH=$PATH:$NODE_HOME/bin

Install head

head needs bzip2 for extraction:
yum install bzip2 -y

Install grunt from the Taobao npm mirror. grunt is a Node.js-based build tool that handles packaging, minification, testing, task running, and so on; the head plugin is started through grunt.

git clone https://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head/
npm install -g grunt --registry=https://registry.npm.taobao.org
npm install grunt --save    # retry a few times on network errors

npm install -g grunt-cli --registry=https://registry.npm.taobao.org
npm install --registry=https://registry.npm.taobao.org

If no error is reported and a build time is printed, the installation succeeded.
Edit the config file:
vim /root/elasticsearch-head/Gruntfile.js
connect: {
                        server: {
                                options: {
                                        port: 9100,
                                        hostname: '*',  # add this line
                                        base: '.',
                                        keepalive: true
Edit vim /root/elasticsearch-head/_site/app.js:
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://39.108.71.229:9200";    # note: this is an Alibaba Cloud setup, so the public address is used here, otherwise head cannot connect to Elasticsearch; normally the internal address goes here
Edit the config file vim /etc/elasticsearch/elasticsearch.yml and add the following:
http.cors.enabled: true
http.cors.allow-origin: '*'
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
Restart Elasticsearch.
Start head:
cd /root/elasticsearch-head/
nohup grunt server &    # start in the background

Configure Kibana

Edit the config file vim /etc/kibana/kibana.yml:
server.port: 5601                              # listen port
server.host: "172.18.241.4"                    # listen address
server.name: "172.18.241.4"                    # server name
elasticsearch.url: "http://172.18.241.4:9200"  # Elasticsearch address
kibana.index: ".kibana"                        # Kibana index; if not set, the .kibana index is not created
Start the service:
systemctl restart kibana.service

Logstash system log collection example

Edit the config file /etc/logstash/logstash.yml and add:
path.config: /etc/logstash/conf.d/*.conf
cd /etc/logstash/conf.d/

vim test.conf
input {
  file {
    path => "/tmp/access.log"
    type => "nginx"
    start_position => "beginning"
    stat_interval => "2"
  }
}

output {
  elasticsearch {
    hosts => ["172.18.241.3:9200"]
    index => "logstash-nginx-%{+YYYY.MM.dd}"
  }

}
Start Logstash:
systemctl restart logstash.service

Check in head whether the newly collected index appears.

Add the index pattern in Kibana.

Elasticsearch cluster configuration

1. The cluster name must be identical on all nodes.
2. Add the discovery setting to each node's config file (see the combined sketch after this list):
discovery.zen.ping.unicast.hosts: ["192.168.88.160", "192.168.88.162", "192.168.74.33"]
3. Node roles:
master-eligible node: node.master: true
data node: node.data: true
4. Adding nodes to a cluster requires restarting all nodes; restart them one at a time rather than together, because Elasticsearch reallocates shards on every restart.
5. If you use X-Pack, it must be installed (and cracked) on every cluster node, with the same certificate shared across all of them.
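A minimal per-node sketch combining these settings (node name assumed, addresses taken from item 2):

cluster.name: ELKtest
node.name: node-2
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["192.168.88.160", "192.168.88.162", "192.168.74.33"]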

Jun 15, 2018
 

Requirement

The company has many API endpoints, including the site home page, all reachable over HTTP; we want Zabbix to monitor their availability.

Create a template and add a web scenario

Add the scenario

Order-API is used as the example.

Add steps

Multiple steps can be added, for example when several servers sit behind a load balancer.

The requirement here is simple, just monitoring whether the API is up, so only the response needs to be checked.

Create the trigger

Meaning of the trigger rules

Zabbix's built-in web monitoring provides several expressions.

The expressions used here:

{web.monitor:web.test.fail[Sxmaps Order APIs].last()}<>0 or 

{web.monitor:web.test.rspcode[Sxmaps Order APIs,Sxmaps Order APIs].last()}<>200

The first expression checks the whole scenario, i.e., as many steps as were defined. For example, the order API runs on two machines, so two steps were added, one per service; if all are healthy the item returns 0, and any non-zero value means a failure and raises an alert.

The second expression checks the response status code and alerts when it is not 200.

Link the template to a host

Web monitoring does not depend on an agent; any host that can reach the web service being checked will do.

View the monitoring data

Jun 08, 2018
 

Preparation

OS: CentOS 7.4

Install the dependency packages:

yum install -y lzo lzo-devel openssl openssl-devel pam pam-devel 
yum install -y pkcs11-helper pkcs11-helper-devel

Install the OpenVPN server

Configure the EPEL repository, then install the openvpn server and the easy-rsa key tool. The latest openvpn is 2.4.6 and easy-rsa is 3.0, whose configuration differs somewhat from 2.0:

yum install openvpn easy-rsa -y

配置OpenVPN服务端

/usr/share/doc/easy-rsa-3.0.3/vars.example

去掉以下注释,根据实际情况修改

set_var EASYRSA                 "$PWD"
set_var EASYRSA_PKI             "$EASYRSA/pki"
set_var EASYRSA_DN              "cn_only"
set_var EASYRSA_REQ_COUNTRY     "CN"
set_var EASYRSA_REQ_PROVINCE    "BEIJING"
set_var EASYRSA_REQ_CITY        "BEIJING"
set_var EASYRSA_REQ_ORG         "OpenVPN CERTIFICATE AUTHORITY"
set_var EASYRSA_REQ_EMAIL       "110@qq.com"
set_var EASYRSA_REQ_OU          "OpenVPN EASY CA"
set_var EASYRSA_KEY_SIZE        2048
set_var EASYRSA_ALGO            rsa
set_var EASYRSA_CA_EXPIRE       7000
set_var EASYRSA_CERT_EXPIRE     3650
set_var EASYRSA_NS_SUPPORT      "no"
set_var EASYRSA_NS_COMMENT      "OpenVPN CERTIFICATE AUTHORITY"
set_var EASYRSA_EXT_DIR "$EASYRSA/x509-types"
set_var EASYRSA_SSL_CONF        "$EASYRSA/openssl-1.0.cnf"
set_var EASYRSA_DIGEST          "sha256"

Copy easy-rsa into place:

cp -ra /usr/share/easy-rsa/3.0/* /etc/openvpn/server/
cp -ra /usr/share/easy-rsa/3.0/* /etc/openvpn/client/
cp -a /usr/share/doc/easy-rsa-3.0.3/vars.example /etc/openvpn/server/vars
cp -a /usr/share/doc/easy-rsa-3.0.3/vars.example /etc/openvpn/client/vars

Create the certificates

Server certificates

For convenience, the CA is created without a password.

cd /etc/openvpn/server/

Initialize the pki directory:

[root@new server]# ./easyrsa init-pki

Note: using Easy-RSA configuration from: ./vars

init-pki complete; you may now create a CA or requests.
Your newly created PKI dir is: /etc/openvpn/server/pki

Create the server CA:

[root@new server]# ./easyrsa build-ca nopass

Note: using Easy-RSA configuration from: ./vars
Generating a 2048 bit RSA private key
.+++
...................................+++
writing new private key to '/etc/openvpn/server/pki/private/ca.key.BJ03tsfZjN'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Common Name (eg: your user, host, or server name) [Easy-RSA CA]:

CA creation complete and you may now import and sign cert requests.
Your new CA certificate file for publishing is at:
/etc/openvpn/server/pki/ca.crt

Create the server key:

[root@new server]# ./easyrsa gen-req testServer nopass

Note: using Easy-RSA configuration from: ./vars
Generating a 2048 bit RSA private key
................................+++
.....+++
writing new private key to '/etc/openvpn/server/pki/private/testServer.key.Gh5ucd4sf2'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Common Name (eg: your user, host, or server name) [testServer]:

Keypair and certificate request completed. Your files are:
req: /etc/openvpn/server/pki/reqs/testServer.req
key: /etc/openvpn/server/pki/private/testServer.key

Sign the server CN and generate the server crt:

[root@new server]# ./easyrsa sign server testServer

Note: using Easy-RSA configuration from: ./vars


You are about to sign the following certificate.
Please check over the details shown below for accuracy. Note that this request
has not been cryptographically verified. Please be sure it came from a trusted
source or that you have verified the request checksum with the sender.

Request subject, to be signed as a server certificate for 3650 days:

subject=
    commonName                = testServer


Type the word 'yes' to continue, or any other input to abort.
  Confirm request details: yes
Using configuration from /etc/openvpn/server/openssl-1.0.cnf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
commonName            :ASN.1 12:'testServer'
Certificate is to be certified until Jun  5 03:38:01 2028 GMT (3650 days)

Write out database with 1 new entries
Data Base Updated

Certificate created at: /etc/openvpn/server/pki/issued/testServer.crt

Generate dh.pem:

./easyrsa gen-dh

Generate ta.key:

openvpn --genkey --secret ta.key
cp -r ta.key /etc/openvpn/

Client certificates

cd /etc/openvpn/client/

Initialize the pki directory:

[root@new client]# ./easyrsa init-pki

Note: using Easy-RSA configuration from: ./vars

init-pki complete; you may now create a CA or requests.
Your newly created PKI dir is: /etc/openvpn/client/pki

Create the client key:

[root@new client]# ./easyrsa gen-req testClient nopass

Note: using Easy-RSA configuration from: ./vars
Generating a 2048 bit RSA private key
..........................................................+++
......................................................+++
writing new private key to '/etc/openvpn/client/pki/private/testClient.key.sHbjZuzAW7'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Common Name (eg: your user, host, or server name) [testClient]:

Keypair and certificate request completed. Your files are:
req: /etc/openvpn/client/pki/reqs/testClient.req
key: /etc/openvpn/client/pki/private/testClient.key

Important: go into the server directory and import the client req to register it with the server CA:

cd /etc/openvpn/server/
[root@new server]# ./easyrsa import-req /etc/openvpn/client/pki/reqs/testClient.req testClient

Note: using Easy-RSA configuration from: ./vars

The request has been successfully imported with a short name of: testClient
You may now use this name to perform signing operations on this request.

Sign the client CN and generate the client certificate (run from the server directory):

[root@new server]# ./easyrsa sign client testClient

Note: using Easy-RSA configuration from: ./vars


You are about to sign the following certificate.
Please check over the details shown below for accuracy. Note that this request
has not been cryptographically verified. Please be sure it came from a trusted
source or that you have verified the request checksum with the sender.

Request subject, to be signed as a client certificate for 3650 days:

subject=
    commonName                = testClient


Type the word 'yes' to continue, or any other input to abort.
  Confirm request details: yes
Using configuration from /etc/openvpn/server/openssl-1.0.cnf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
commonName            :ASN.1 12:'testClient'
Certificate is to be certified until Jun  5 05:52:38 2028 GMT (3650 days)

Write out database with 1 new entries
Data Base Updated

Certificate created at: /etc/openvpn/server/pki/issued/testClient.crt

Server configuration

Enable IP forwarding:

[root@new server]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf 
[root@new server]# sysctl -p
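Besides IP forwarding, a NAT rule is usually needed so that client traffic pushed through redirect-gateway can reach the internet. A sketch using iptables, assuming the 10.8.1.0/24 VPN subnet from the config below and eth0 as the outbound interface:

iptables -t nat -A POSTROUTING -s 10.8.1.0/24 -o eth0 -j MASQUERADE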

Edit the config file vim /etc/openvpn/server.conf:

port 1194
proto tcp
dev tun
ca /etc/openvpn/server/pki/ca.crt
cert /etc/openvpn/server/pki/issued/testServer.crt
key /etc/openvpn/server/pki/private/testServer.key
dh /etc/openvpn/server/pki/dh.pem
tls-auth /etc/openvpn/ta.key 0
server 10.8.1.0 255.255.255.0
ifconfig-pool-persist ipp.txt
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 223.5.5.5"
push "dhcp-option DNS 114.114.114.114"
keepalive 10 120
cipher AES-256-CBC
comp-lzo
max-clients 50
user openvpn
group openvpn
persist-key
persist-tun
status openvpn-status.log
log-append  openvpn.log
verb 3
mute 20

If you need to bind a specific IP, add local xxx.xxx.xxx.xxx to the config file. On an Alibaba Cloud VPC the server cannot see its own public address, so bind the internal address instead; or omit the directive to listen on all addresses, which is the default.

Start the service:

systemctl restart openvpn@server
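The client side is not covered above; a minimal client config sketch (the server address is an assumption, the certificate file names match the ones created earlier):

client
dev tun
proto tcp
remote your.server.address 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert testClient.crt
key testClient.key
tls-auth ta.key 1
cipher AES-256-CBC
comp-lzo
verb 3

Note that tls-auth uses key direction 1 on the client to pair with the server's 0.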

Jun 04, 2018
 

One-way sync between different databases

In production we run both one-way and two-way sync between a local MySQL and Alibaba Cloud RDS, different databases for different needs, plus sync between self-hosted databases on the Alibaba Cloud classic network and RDS.

To stay close to production this test also uses RDS, just a test RDS instance and a test database.

OS: CentOS 7.4

Database: MySQL 5.6

Servers:

node1 192.168.30.213  otter (all components: manager, node, zookeeper, etc.)

node2 192.168.30.212  local database

RDS instance rm-wz9mv9gzid9ja28y94o.mysql.rds.aliyuncs.com

Sync direction:

node2 (local database)  --- > RDS

Steps

Enable binlog on the local database, set the format to ROW, and set a server id:

server-id = 1
log-bin=mysql-bin
binlog_format=row

RDS already has binlog enabled, in ROW format, by default.
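Before continuing, you can verify the settings on each database with a quick check:

SHOW VARIABLES LIKE 'log_bin';        -- should be ON
SHOW VARIABLES LIKE 'binlog_format';  -- should be ROW
SHOW VARIABLES LIKE 'server_id';      -- must be non-zero and unique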

Export the local database and import it into RDS:

mysqldump -uroot  -B --events --single-transaction zwl > /opt/zwl.sql
mysql -urds_root -hrm-wz9mv9gzid9ja28y94o.mysql.rds.aliyuncs.com -p < /opt/zwl.sql

Configure the data sources

Add the local data source.

otter does not support the latin1 character set, but this does not affect syncing.

Data source field reference:

Add the RDS data source.

The source and target schemas must be identical, otherwise DDL statements cannot be executed.

Data source name: a name you choose
Type: the database type
Username: the database user
Password: the database password
URL: the JDBC URL otter uses to connect, e.g. jdbc:mysql://server_ip:port
Encoding: the database character set

Configure the tables

Add the local table, i.e. the source table.

Add the RDS table, i.e. the target table.

This test syncs the wp_commentmeta table of the zwl database; to sync all tables, use the regex .*

Schema name: the database to sync
Table name: the table to sync
Data source: which data source the table comes from
Validate connection/table: verifies the table is reachable
Query Schema&Table: verifies the database and table are correct

Configure canal

Otter uses the open-source canal project to obtain incremental database log data; canal can be thought of as a pseudo-slave of the source database.

How it works: canal implements the MySQL slave interaction protocol, masquerades as a MySQL slave, and sends a dump request to the MySQL master; the master then starts pushing binary logs to the slave (canal), and canal parses the binary log objects (raw byte streams).

In Otter Manager, go to 配置管理 - canal配置 and click add.

Configure the sync

In Otter Manager, open 同步管理 and add a sync task.

After creating it, click the Channel name to enter Pipeline management and add a Pipeline:

Pipeline name: anything

The select machine and load machine are node worker nodes; there is only one node here, so both must be that node.

canal name: the canal configured above

After creating the Pipeline, click its name to enter Pipeline management and add the mapping rules.

Test

Back in 同步管理, click enable and test:

Insert a row into the source wp_commentmeta table and check whether it is synced to the target database.

Jun 02, 2018
 

A record of the setup process. This project uses Python with the Alibaba Cloud SDK to call the Alibaba Cloud API. I am not very familiar with Python myself, just about able to read it, and the script author has already made it quite complete.

Basically every monitoring item you would want is there.
阿里云SDK开发指南

https://help.aliyun.com/product/52507.html

Project page and documentation

https://github.com/XWJR-Ops/zabbix-RDS-monitor
https://github.com/XWJR-Ops/zabbix-RDS-monitor/blob/master/README.md

Create an Alibaba Cloud Access Key

First log in to Alibaba Cloud and create an accesskey, obtaining the AccessKey ID, the Secret, and the region ID (RegionID).

The Access Key ID and Access Key Secret are the keys for accessing the Alibaba Cloud API and carry the account's full permissions, so guard them carefully.

Because a primary access key has full permissions, including create and delete, an Alibaba Cloud RAM account is used here for scoped authorization.

After creating it, save the Access Key ID and Access Key Secret.

The original plan was to grant read-only RDS permission, but testing showed that many monitoring items then return no data and some report 403 in the backend, so the RDS management permission has to be granted instead.

Install the Alibaba Cloud SDK modules

Clone the project:

git clone https://github.com/XWJR-Ops/zabbix-RDS-monitor

Install the SDK versions the project requires (it is written for Python 2.7; a fresh CentOS 7.4 ships with Python 2.7.5, so little is needed beyond upgrading pip):

[root@node1 ~]# pip install aliyun-python-sdk-core==2.3.5 aliyun-python-sdk-rds datetime

Local test

With the SDK installed, set the ID, Secret, and RegionID in both scripts.

Set RDS aliases in the cloud console

The scripts collect the RDS alias:
do not keep the default alias
do not use a Chinese alias (Zabbix will not recognize it)

Run the discovery script; output like the following means the RDS information was fetched successfully:

python discovery_rds.py

{"data": [{"{#DBINSTANCEID}": "rm-wz999983674j", "{#DBINSTANCEDESCRIPTION}": "rds-test"}]}

Zabbix setup

Place the two scripts in the following directory:

/etc/zabbix/script

chmod +x /etc/zabbix/script/*

Create a conf file in zabbix_agentd.d:

vim rds.conf

UserParameter=rds.discovery,/usr/bin/python /etc/zabbix/script/discovery_rds.py
UserParameter=check.rds[*],/usr/bin/python /etc/zabbix/script/check_rds.py $1 $2 $3

Restart zabbix-agent.
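You can then verify the keys from the Zabbix server before importing the template (the agent address is a placeholder):

zabbix_get -s <agent-ip> -k rds.discovery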

Import the template from the project.

The discovery interval is 2 minutes.
