
## Kubernetes Cluster Installation and Deployment

Kubernetes uses a master/slave architecture; the current version supports a single master node and multiple slave (minion) nodes.

#### k8s server + etcd + docker

- ip: 192.168.124.9
- kubernetes: v1.2.0
- centos: 7.0_3.10.0-327.el7.x86_64
- docker: 1.10.3

k8s node (minion) + docker

- ip: 192.168.124.10
- kubernetes: v1.2.0
- centos: 7.0_3.10.0-327.el7.x86_64
- docker: 1.10.3

k8s node (minion) + docker

- ip: 192.168.124.11
- kubernetes: v1.2.0
- centos: 7.0_3.10.0-327.el7.x86_64
- docker: 1.10.3

The initial deployment uses three machines; to scale out later, just follow the minion-node configuration for each new node. (Also, because etcd uses the Raft consensus algorithm, keep the total number of etcd members odd: 3, 5, 7, ...)
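The odd-number advice follows from Raft's majority rule: a cluster of N members can only make progress while floor(N/2)+1 of them are alive, so an even member count buys no extra fault tolerance. A quick illustration:

```shell
# Raft quorum = floor(N/2) + 1; failures tolerated = N - quorum.
for n in 3 4 5 6 7; do
  quorum=$(( n / 2 + 1 ))
  echo "members=$n  quorum=$quorum  failures_tolerated=$(( n - quorum ))"
done
```

Both 3 and 4 members tolerate only one failure, which is why 4 is never worth running.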

Deployment plan:

Master server: stop and disable the firewall -> install the NTP service -> install etcd -> configure etcd -> install docker -> configure docker -> install kubernetes -> configure kubernetes -> install flannel -> configure flannel -> reconfigure docker -> install iptables -> configure iptables -> set docker, etcd, kubernetes, and the related services to start at boot.

The above is the master configuration; the master is also used as a k8s node.

Node servers: stop and disable the firewall -> install flannel + kubernetes + docker -> configure each service -> enable each service at boot -> install and configure iptables -> deployment complete.

#### Master Server Configuration

#### Step 1: Disable the firewall

Stop and disable firewalld to avoid conflicts with docker's iptables rules.

```shell
# systemctl stop firewalld
# systemctl disable firewalld
```

#### Step 2: Install NTP

```shell
# yum -y install ntp
# systemctl start ntpd
# systemctl enable ntpd
```

#### Step 3:

1) Install etcd, kubernetes, and flannel

```shell
# yum -y install etcd kubernetes flannel
```

2) Edit the etcd configuration file: vim /etc/etcd/etcd.conf

```shell
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.124.9:2379"
```
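Once etcd has been started with this configuration (it is enabled and started in step 11 below, or you can `systemctl start etcd` now), a quick sanity check on the master confirms it is answering on port 2379:

```shell
# etcd should report its version over the advertised client URL...
curl -s http://192.168.124.9:2379/version
# ...and etcdctl, run on the master itself, should report a healthy cluster
etcdctl cluster-health
```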

3) Edit the kubernetes API server configuration file: vim /etc/kubernetes/apiserver

```shell
# The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.124.9:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
```
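After kube-apiserver is started, the health endpoint on the insecure port configured above gives a quick check that it is up:

```shell
# should print "ok" when the apiserver is healthy
curl -s http://192.168.124.9:8080/healthz
```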

4) Store the network configuration in etcd; the flannel service on each node server will pull this configuration

```shell
# etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'
```

5) Configure flannel: vim /etc/sysconfig/flanneld

```shell
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD="http://192.168.124.9:2379"   # the etcd server address

# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_KEY="/coreos.com/network"     # the key name is arbitrary, but it must be identical on every node

# Any additional options that you want to pass
FLANNEL_OPTIONS="--iface=eno16777736"      # eno16777736 is this host's primary NIC (its "eth0")
```

6) Create a flannel configuration file, e.g. vim /etc/sysconfig/flannel-config.json (the exact path is arbitrary, since the file is only pushed to etcd in step 7), containing:

```json
{
  "Network": "172.17.0.0/16",
  "SubnetLen": 24,
  "Backend": {
    "Type": "vxlan",
    "VNI": 7890
  }
}
```

The "Network" value, 172.17.0.0/16, must match the network configured on the etcd server in step 4, and the VNI must be a unique value (JSON does not allow comments, so that note cannot live in the file itself).

7) Push flannel-config.json to the etcd server

```shell
curl -L http://192.168.124.9:2379/v2/keys/coreos.com/network/config -XPUT --data-urlencode value@flannel-config.json
```
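Because JSON allows neither comments nor typographic quotes, it can save a round trip to validate the file locally before pushing it. A minimal sketch, using python3's json.tool (on stock CentOS 7 the interpreter may be `python` instead); the heredoc reproduces the file from step 6:

```shell
cat > /tmp/flannel-config.json <<'EOF'
{
  "Network": "172.17.0.0/16",
  "SubnetLen": 24,
  "Backend": {
    "Type": "vxlan",
    "VNI": 7890
  }
}
EOF
# json.tool exits non-zero on any syntax error
python3 -m json.tool < /tmp/flannel-config.json > /dev/null && echo "valid JSON"
```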

8) Configure the docker service so that docker0 joins the flannel network.

```shell
vim /usr/lib/systemd/system/docker.service

[Service]
Type=notify
NotifyAccess=all
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
#EnvironmentFile=-/etc/sysconfig/docker-network   # comment out this line and replace it with the one below
EnvironmentFile=-/run/flannel/subnet.env
......
# (the rest of the file is unchanged; only the line above needs to be modified)
```
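For context: once flanneld is running it writes /run/flannel/subnet.env, and the EnvironmentFile line above makes the docker daemon pick up its assigned subnet. A sketch of what the file contains and how the values are consumed (the subnet and MTU below are illustrative, not taken from a real run):

```shell
# A sample subnet.env like the one flanneld generates (values illustrative)
mkdir -p /tmp/flannel-demo
cat > /tmp/flannel-demo/subnet.env <<'EOF'
FLANNEL_NETWORK=172.17.0.0/16
FLANNEL_SUBNET=172.17.60.1/24
FLANNEL_MTU=1450
EOF

# docker's unit sources this file, so the daemon effectively starts as:
. /tmp/flannel-demo/subnet.env
echo "docker daemon --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
```

`--bip` pins docker0's address into the flannel-assigned /24, and `--mtu` leaves headroom for the vxlan encapsulation overhead.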

9) Edit the kubernetes config file: vim /etc/kubernetes/config

```shell
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.124.9:8080"
```

10) Edit the kubernetes kubelet configuration: vim /etc/kubernetes/kubelet

```shell
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.124.9"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.124.9:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""
```

The remaining configuration files can be left at their defaults.

11) Start the services and enable them at boot (note the systemd unit installed by the flannel package is named flanneld):

```shell
for DZER0 in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
    systemctl restart $DZER0
    systemctl enable $DZER0
    systemctl status $DZER0
done
```

At this point a single-host kubernetes is fully configured; verify with `kubectl -s '192.168.124.9:8080' get nodes`:

```shell
[root@localhost kubernetes]# kubectl -s '192.168.124.9:8080' get nodes
NAME             STATUS    AGE
192.168.124.9    Ready     5s
```
### Node Server Configuration (minions):

a) Start the node services

```shell
for DZER0 in kube-proxy kubelet docker flanneld; do
    systemctl restart $DZER0
    systemctl enable $DZER0
    systemctl status $DZER0
done
```

On each node, repeat master steps 1 (without installing etcd), 5, 6, 7, 8, 9, 10, and (a) above; once the services are up, run `kubectl -s '192.168.124.9:8080' get nodes` to check whether the deployment is complete.
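As a quick end-to-end smoke test once all nodes show Ready, you can ask the apiserver to schedule a couple of pods (nginx here is just a convenient public image):

```shell
kubectl -s http://192.168.124.9:8080 run nginx --image=nginx --replicas=2
# the pods should spread across the minions and reach Running
kubectl -s http://192.168.124.9:8080 get pods -o wide
```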

Update (July 21):

Notes:

1. flannel deployment issues

While deploying flannel, keep an eye on the routing table. If docker was installed before flannel, the docker0 bridge must be deleted first:

```shell
# ifconfig docker0 down
# brctl delbr docker0
```

After that, run step 8 again, then restart docker and flanneld.
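To confirm the restart worked, docker0 and flannel.1 (the interface the vxlan backend creates) should both hold addresses inside the 172.17.0.0/16 range, and both should appear in the routing table:

```shell
ip addr show flannel.1
ip addr show docker0
# both subnets should show up as routes
ip route | grep -E 'flannel|docker0'
```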

2. If the file referenced in step 8 cannot be found, setting docker to start at boot fixes it:

```shell
systemctl enable docker
```

Finally, I hope everyone's deployment goes smoothly. ^_^

References:

http://udn.yyuap.com/thread-91727-1-1.html

    http://blog.liuts.com/post/247/

    http://www.infoq.com/cn/articles/etcd-interpretation-application-scenario-implement-principle

    https://imspm.com/article/1463470113552
