<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>DevOps Tech Sharing &#187; Kubernetes</title>
	<atom:link href="http://www.showerlee.com/archives/category/ci-cd/kubernetes/feed" rel="self" type="application/rss+xml" />
	<link>http://www.showerlee.com</link>
	<description>Learning ops and development together</description>
	<lastBuildDate>Mon, 19 Oct 2020 05:51:41 +0000</lastBuildDate>
	<language>zh-CN</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.6</generator>
		<item>
		<title>Kubernetes: Connecting to the k8s API cluster with the Python client</title>
		<link>http://www.showerlee.com/archives/2804</link>
		<comments>http://www.showerlee.com/archives/2804#comments</comments>
		<pubDate>Tue, 23 Oct 2018 14:01:25 +0000</pubDate>
		<dc:creator>showerlee</dc:creator>
				<category><![CDATA[DevTools]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[Other]]></category>
		<category><![CDATA[k8s]]></category>
		<category><![CDATA[PYTHON]]></category>

		<guid isPermaLink="false">http://www.showerlee.com/?p=2804</guid>
		<description><![CDATA[The command most of us use most with k8s day to day is surely kubectl, which is installed on the master by default and binds to the local [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>
	The command we all use most day to day with <a href="http://www.showerlee.com/archives/tag/k8s" title="查看k8s中的全部文章" class="tag_link">k8s</a> is kubectl. It is installed on the master by default and authenticates against the local k8s API cluster with a bound token, which covers everyday data exchange with the cluster. The problem is that when we need to call the k8s API remotely, or integrate k8s into automation, the shortcut of ssh-ing in every time just to run kubectl is nowhere near enough.
</p>
<p>
	The k8s project officially supports two mainstream client languages for connecting to the k8s API cluster: one is Go, and the other is Python, the DevOps staple.
</p>
<p>
	In this post I will show how to use the Python client package to write a script that connects to the k8s API cluster remotely.
</p>
<p>
	Besides interacting with Kubernetes on the command line over ssh+kubectl, we can then talk to the k8s cluster API directly from a remote host with no command-line dependency, exchanging data with k8s through its Python API. This sets us up to automate and extend k8s with Python later on.
</p>
<p>
	Without further ado, straight into the runbook:
</p>
<p>
	<span style="font-size:16px;color:#337FE5;"><strong>I. Deploy the Python client environment and connect to the k8s API cluster.</strong></span>
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	<span style="color:#337FE5;">1. Install the build dependencies for Python 3.6.5</span>
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	<span style="vertical-align:baseline;line-height:1.5;"># yum install epel-release -y</span>
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	# yum groupinstall "Development tools" -y
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	# yum install zlib-devel bzip2-devel openssl-devel ncurses-devel xz-devel sqlite-devel readline-devel tk-devel gdbm-devel db4-devel libpcap-devel -y
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	<span style="color:#337FE5;">2. Build and install Python 3.6.5 together with the pip package manager</span>
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	# wget <a href="https://www.python.org/ftp/python/3.6.5/Python-3.6.5.tar.xz" rel="nofollow">https://www.python.org/ftp/python/3.6.5/Python-3.6.5.tar.xz</a> --no-check-certificate
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	# tar xf Python-3.6.5.tar.xz
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	# cd Python-3.6.5
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	# ./configure --prefix=/usr/local --with-ensurepip=install --enable-shared LDFLAGS="-Wl,-rpath /usr/local/lib"
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	# make &amp;&amp; make altinstall
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	<span style="color:#337FE5;">3. Install virtualenv</span>
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	#&nbsp;pip3.6 install --upgrade pip
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	#&nbsp;pip3.6 install virtualenv
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	<span style="color:#337FE5;">4. Create and activate a virtualenv</span>
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	# <span style="color:#111111;font-family:Helvetica;font-size:13px;background-color:#FFFFFF;">virtualenv -p /usr/local/bin/python3 .py3env</span>
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	<span style="color:#111111;font-family:Helvetica;font-size:13px;background-color:#FFFFFF;">#&nbsp;source .py3env/bin/activate</span>
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	
</p>
<p><span style="color:#111111;font-family:Helvetica;font-size:13px;background-color:#FFFFFF;"><span style="color:#337FE5;font-family:Helvetica;font-size:13px;background-color:#FFFFFF;">5. Install the kubernetes Python client package</span></span> </p>
<p>
	<span style="color:#000000;font-family:Helvetica;font-size:13px;background-color:#FFFFFF;"><span style="color:#000000;font-family:Helvetica;font-size:13px;background-color:#FFFFFF;"># pip install kubernetes</span></span>
</p>
<p>
	<span style="color:#000000;font-family:Helvetica;font-size:13px;background-color:#FFFFFF;"><span style="color:#000000;font-family:Helvetica;font-size:13px;background-color:#FFFFFF;"><br />
</span></span>
</p>
<p>
	<span style="color:#000000;font-family:Helvetica;font-size:13px;background-color:#FFFFFF;"><span style="color:#000000;font-family:Helvetica;font-size:13px;background-color:#FFFFFF;"><strong><span style="font-size:16px;color:#337FE5;">II. Get the API cluster URL and token on the k8s master</span></strong></span></span>
</p>
<p>
	<span style="color:#000000;">Before writing the Python script, we need to create a service account with admin privileges on the k8s master and grab its token to serve as the script&#8217;s credential.</span>
</p>
<p>
	<span style="color:#337FE5;">1. Grab the cluster URL</span>
</p>
<p>
	# APISERVER=$(kubectl config view --minify | grep server | cut -f 2- -d ":" | tr -d " ")
</p>
<p>
	# echo $APISERVER
</p>
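<p>
	The shell pipeline above can be mirrored in Python. A minimal sketch, assuming kubeconfig-style text as input (the sample below is illustrative, not taken from a real cluster):
</p>

```python
# Pull the "server:" value out of kubeconfig-style text, mirroring the
# grep | cut | tr pipeline above.
def extract_apiserver(kubeconfig_text):
    for line in kubeconfig_text.splitlines():
        line = line.strip()
        if line.startswith("server:"):
            # everything after the first ":", whitespace removed
            return line.split(":", 1)[1].strip()
    return None

# Illustrative kubeconfig fragment
sample = """\
clusters:
- cluster:
    server: https://10.110.16.14:6443
  name: kubernetes
"""

print(extract_apiserver(sample))  # https://10.110.16.14:6443
```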
<p>
	
</p>
<p>
	<span style="color:#337FE5;">2. Create a k8s admin-user</span>
</p>
<p>
	<span style="color:#000000;"># mkdir -p /kube/role</span>
</p>
<p>
	<span style="color:#000000;"># cd /kube/role</span>
</p>
<p>
	Create the admin-user in the kube-system namespace
</p>
<p>
	<span style="color:#000000;"># vi&nbsp;<span>CreateServiceAccount.yaml</span> </span>
</p>
<pre class="prettyprint">apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system</pre>
<p>
	Grant admin privileges to admin-user
</p>
<p># vi RoleBinding.yaml</p>
<pre class="prettyprint">apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system</pre>
<p>#&nbsp;kubectl create -f CreateServiceAccount.yaml<br />
#&nbsp;kubectl create -f RoleBinding.yaml</p>
<p><span style="color:#337FE5;">3. Get the admin-user token</span> </p>
<p>
	# Token=$(kubectl describe secret $(kubectl get secret -n kube-system | grep ^admin-user | awk '{print $1}') -n kube-system | grep -E '^token'| awk '{print $2}')
</p>
<p>
	# echo $Token
</p>
<p>
	<span style="color:#E53333;">Finally, copy the token and the APISERVER address over to the Python client host for the script to use.</span>
</p>
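<p>
	The token acts as a bearer credential: the script simply sends it in an HTTP authorization header. A minimal sketch of how that header is assembled (the token string below is a placeholder, not a real credential):
</p>

```python
# Build the authorization header the Kubernetes API expects from a
# service-account token. The token value here is only a placeholder.
def bearer_header(token):
    return {"authorization": "Bearer " + token.strip()}

token = "eyJhbGciOiJSUzI1NiJ9.placeholder\n"  # e.g. read from token.txt
print(bearer_header(token))
```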
<p>
	
</p>
<p>
	<span style="font-size:16px;"><strong><span style="color:#337FE5;">III</span></strong><span style="color:#337FE5;"><strong>. Write the script on the Python client host</strong></span></span>
</p>
<p>
	<span style="color:#337FE5;">1. Create the directory layout</span>
</p>
<p>
	# mkdir -p /kube/auth
</p>
<p>
	# cd /kube/auth
</p>
<p>
	# touch token.txt
</p>
<p><span style="color:#337FE5;">2. Copy the token string obtained on the k8s master into token.txt</span> </p>
<p>
	<span style="color:#E53333;">The script reads this token and uses it as the API key for bearer authorization when establishing the authenticated connection to the remote k8s API.</span>
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;">3. Write the Python client script</span>
</p>
<p>
	# vi k8s_auth.py
</p>
<pre class="prettyprint lang-py">#!/usr/bin/env python


from kubernetes import client, config


def main():
    # Define the bearer token we are going to use to authenticate.
    # See here to create the token:
    # <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/" rel="nofollow">https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/</a>

    with open('token.txt', 'r') as file:
        Token = file.read().strip('\n')

    APISERVER = 'https://10.110.16.14:6443'

    # Create a configuration object
    configuration = client.Configuration()

    # Specify the endpoint of your Kube cluster
    configuration.host = APISERVER

    # Security part.
    # In this simple example we are not going to verify the SSL certificate of
    # the remote cluster (for simplicity reason)
    configuration.verify_ssl = False

    # Nevertheless if you want to do it you can with these 2 parameters
    # configuration.verify_ssl=True
    # ssl_ca_cert is the filepath to the file that contains the certificate.
    # configuration.ssl_ca_cert="certificate"
    configuration.api_key = {"authorization": "Bearer " + Token}

    # configuration.api_key["authorization"] = "bearer " + Token
    # configuration.api_key_prefix['authorization'] = 'Bearer'
    # configuration.ssl_ca_cert = 'ca.crt'
    # Make our configuration the default for API clients
    client.Configuration.set_default(configuration)

    # Do calls
    v1 = client.CoreV1Api()
    print("Listing pods with their IPs:")
    ret = v1.list_pod_for_all_namespaces(watch=False)
    for i in ret.items:
        print("%s\t%s\t%s" %
              (i.status.pod_ip, i.metadata.namespace, i.metadata.name))


if __name__ == '__main__':
    main()</pre>
<p>
	<span style="color:#E53333;">The script picks up the k8s token string from the same directory, takes the API server address we obtained earlier, connects to the remote API through the kubernetes Python package, and finally prints all pods across all k8s namespaces.</span>
</p>
<p>
	<span style="color:#E53333;">The effect is similar to kubectl get pods --all-namespaces.</span>
</p>
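<p>
	Under the hood this is an authenticated HTTPS GET against the API server. As a rough stdlib-only sketch of the same call (APISERVER and the token are placeholders standing in for the values from section II; the request itself is commented out since it needs a live cluster):
</p>

```python
import json
import urllib.request

# Rough stdlib-only equivalent of the script above: list all pods via a
# raw authenticated GET against the core v1 API.
APISERVER = "https://10.110.16.14:6443"
token = "placeholder-token"

req = urllib.request.Request(
    APISERVER + "/api/v1/pods",
    headers={"Authorization": "Bearer " + token},
)

print(req.full_url)                     # https://10.110.16.14:6443/api/v1/pods
print(req.get_header("Authorization"))  # Bearer placeholder-token

# Requires a reachable cluster:
# with urllib.request.urlopen(req) as resp:
#     for item in json.loads(resp.read())["items"]:
#         print(item["status"].get("podIP"),
#               item["metadata"]["namespace"], item["metadata"]["name"])
```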
<p>
	
</p>
<p>
	<span style="color:#337FE5;">4. Make the script executable and run it</span>
</p>
<p>
	# chmod 755 k8s_auth.py
</p>
<p>
	# ./k8s_auth.py
</p>
<pre class="prettyprint lang-bsh">Listing pods with their IPs:
...
10.244.1.3	default	db
10.244.1.2	default	kubernetes-downwardapi-volume-example
10.244.1.245	default	nginx1-7-deployment-667547b6d8-c5nsr
10.244.1.239	default	nginx1-7-deployment-667547b6d8-mjxq4
10.244.1.247	default	nginx1-8-deployment-d46768cf9-888v8
10.244.1.242	default	nginx1-8-deployment-d46768cf9-fglq2
10.110.16.14	kube-system	etcd-kube-master
10.110.16.14	kube-system	kube-apiserver-kube-master
10.110.16.14	kube-system	kube-controller-manager-kube-master
10.244.1.241	kube-system	kube-dns-6f4fd4bdf-qsxrj
10.110.16.14	kube-system	kube-flannel-ds-5sdlg
10.110.16.15	kube-system	kube-flannel-ds-qctv4
10.110.16.14	kube-system	kube-proxy-5nscq
10.110.16.15	kube-system	kube-proxy-6g7jc
10.110.16.14	kube-system	kube-scheduler-kube-master
10.244.1.246	kube-system	kubernetes-dashboard-58f5cb49c-6dxgn
10.244.1.240	kube-system	tiller-deploy-cbb85d8dc-f6rsv
10.110.16.15	kube-system	traefik-ingress-lb-765c44656f-fkzxb
10.244.1.243	spinnaker	kubelive-create-bucket-wxlv4
10.244.1.244	spinnaker	kubelive-delete-jobs-zhn75</pre>
<p>With that, the Python client script is complete: it connects to the k8s API cluster and prints the basic information of every pod in every namespace.</p>
<p>
	Finished...</p>
<div>Notice: this article is licensed under <a rel="external" href="http://creativecommons.org/licenses/by-nc-sa/3.0/deed.zh" title="署名-非商业性使用-相同方式共享 3.0 Unported">CC BY-NC-SA 3.0</a></div><div>When republishing, please credit the source: <a rel="external" title="DevOps技术分享" href="http://www.showerlee.com/archives/2804">DevOps Tech Sharing</a></div><div>Permalink: <a rel="external" title="Kubernetes之python client连接k8s API cluster" href="http://www.showerlee.com/archives/2804">http://www.showerlee.com/archives/2804</a></div>]]></content:encoded>
			<wfw:commentRss>http://www.showerlee.com/archives/2804/feed</wfw:commentRss>
		<slash:comments>5</slash:comments>
		</item>
		<item>
		<title>Kubernetes: Ingress + Traefik</title>
		<link>http://www.showerlee.com/archives/2701</link>
		<comments>http://www.showerlee.com/archives/2701#comments</comments>
		<pubDate>Sat, 08 Sep 2018 09:32:15 +0000</pubDate>
		<dc:creator>showerlee</dc:creator>
				<category><![CDATA[DevTools]]></category>
		<category><![CDATA[Docker]]></category>
		<category><![CDATA[Kubernetes]]></category>

		<guid isPermaLink="false">http://www.showerlee.com/?p=2701</guid>
		<description><![CDATA[Today is a day worth celebrating: the k8s ingress reverse proxy I had been trying to get working for half a year finally came together with traefik [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>
	Today is a day worth celebrating: the k8s ingress reverse proxy I had been trying to get working for half a year finally came together thanks to traefik. My thanks here to my sweet baby for the support; because of you, I have the drive to keep moving forward.
</p>
<p>
	<span style="color:#337FE5;"><span style="color:#337FE5;"><a href="https://traefik.io/" target="_blank"><span style="color:#337FE5;">traefik</span></a></span><span style="color:#337FE5;">&nbsp;</span></span>is an open-source reverse proxy and load balancer. Its biggest strength is that it integrates directly with common microservice systems and configures itself automatically and dynamically. It currently supports backends including Docker, Swarm, Mesos/Marathon, Kubernetes, Consul, Etcd, Zookeeper, BoltDB, and a REST API.
</p>
<p>
	Because microservice architectures, Docker, and Kubernetes orchestration only became popular in the last few years, established reverse proxies such as nginx and apache did not support them at first. The Ingress Controller appeared to bridge Kubernetes and a front-end load balancer such as nginx: it talks to Kubernetes, writes out nginx configuration, and reloads it, which is a compromise. Traefik, by contrast, supports Kubernetes natively. It talks to the Kubernetes API itself and senses backend changes, so it can stand in for a separate Ingress Controller as the proxy that interfaces with k8s. The overall architecture looks like this:
</p>
<p>
	<a href="http://www.showerlee.com/?attachment_id=2703"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/09/traefik.png" alt="traefik" width="812" height="389" class="alignnone size-full wp-image-2703" /></a>
</p>
<p>
	<span style="color:#000000;"></span>
</p>
<p>
	Why choose traefik?
</p>
<p>
	Written in Go, deployed as a single binary, OS-independent, and available as a small Docker image.<br />
Supports Docker/Etcd backends, connecting naturally to our microservice cluster.<br />
Built-in web UI, making administration fairly convenient.<br />
Automatic ACME (Let&#8217;s Encrypt) certificate provisioning.<br />
Decent performance; we are nowhere near squeezing the load balancer, so ease of use matters more.
</p>
<p>
	<br />
Beyond that, traefik also offers:<br />
A RESTful API.<br />
Backend health checks with automatic reconfiguration.<br />
Hot reloading of configuration files and graceful restarts.<br />
WebSocket and HTTP/2 support.
</p>
<p>
	
</p>
<p>
	<span style="color:#E53333;">Plenty of guides online manage to set up ingress+traefik for proxying inside the cluster network. What they rarely show is how to expose that reverse-proxy IP externally without relying on AWS or another public cloud: on-prem, we define hostNetwork so that the node IP itself serves as the reverse-proxy IP created inside the cluster, and that is what this post covers.</span>
</p>
<p>
	Picking up from the previous post, <a href="http://www.showerlee.com/archives/2200" target="_blank">Kubernetes 1.9 + Docker 17 offline installation</a>, let&#8217;s walk through how k8s achieves <span>cluster reverse proxying with Ingress + Traefik.</span>
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;font-size:14px;">1. Fetch the Traefik k8s example manifests</span>
</p>
<p>
	# cd /kube/
</p>
<p>
	#&nbsp;git clone <a href="https://github.com/containous/traefik.git" rel="nofollow">https://github.com/containous/traefik.git</a>
</p>
<p>
	# cd&nbsp;traefik/examples/k8s/
</p>
<p>
	# ll
</p>
<pre class="prettyprint lang-bsh">total 36
-rw-r--r-- 1 root root  140 Aug 26 04:23 cheese-default-ingress.yaml
-rw-r--r-- 1 root root 1805 Aug 26 04:23 cheese-deployments.yaml
-rw-r--r-- 1 root root  519 Aug 26 04:23 cheese-ingress.yaml
-rw-r--r-- 1 root root  509 Aug 26 04:23 cheese-services.yaml
-rw-r--r-- 1 root root  504 Aug 26 04:23 cheeses-ingress.yaml
-rw-r--r-- 1 root root 1144 Aug 26 06:40 traefik-deployment.yaml
-rw-r--r-- 1 root root 1206 Aug 26 04:23 traefik-ds.yaml
-rw-r--r-- 1 root root  694 Aug 26 04:23 traefik-rbac.yaml
-rw-r--r-- 1 root root  466 Aug 26 04:40 ui.yaml</pre>
<p>
	This directory holds the example yaml files needed to start Traefik. Traefik ships deployment manifests for each type of orchestrator; on Kubernetes it can run as either a Deployment or a DaemonSet. Either works, and here I choose Deployment.
</p>
<p>
	<span style="color:#E53333;"><br />
</span>
</p>
<p>
	<span style="color:#337FE5;font-size:14px;">2. Configure the Traefik manifests</span>
</p>
<p>
	<span style="color:#E53333;">Remember the hostNetwork setting I mentioned earlier?&nbsp;</span>
</p>
<p>
	<span style="color:#E53333;">Before deploying, we need to add one line to traefik-deployment.yaml:&nbsp;</span><strong><span style="color:#E53333;">hostNetwork: true</span></strong>
</p>
<p>
	<span style="color:#E53333;">This ensures the Traefik reverse proxy sets its own </span><span style="color:#E53333;">endpoint IP to the node IP, reachable by any device on the same network segment as the k8s nodes.</span>
</p>
<p>
	<span style="color:#E53333;"><span style="color:#000000;">We can configure it like this:</span><br />
</span>
</p>
<p>
	<span style="color:#E53333;"><span style="color:#000000;"># vi&nbsp;traefik-deployment.yaml</span></span>
</p>
<p>
	<span style="color:#E53333;"><span style="color:#000000;"> </span></span>
</p>
<pre class="prettyprint">---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      hostNetwork: true
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
  type: NodePort</pre>
<p>
	# vi traefik-rbac.yaml
</p>
<pre class="prettyprint lang-bsh">---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system</pre>
<p>
	That completes the initial edits to the k8s manifests, and we can move straight on to installing Traefik.
</p>
<p>
	
</p>
<p>
	<span style="color:#E53333;"><span style="color:#000000;"><span style="color:#337FE5;font-size:14px;">3. Install Traefik</span><br />
</span></span>
</p>
<p>
	<span style="color:#E53333;"><span style="color:#000000;"><span style="color:#337FE5;"><span style="color:#000000;"># kubectl apply -f traefik-rbac.yaml<br />
</span></span></span></span>
</p>
<pre class="prettyprint lang-bsh">clusterrole "traefik-ingress-controller" created
clusterrolebinding "traefik-ingress-controller" created</pre>
<p># kubectl apply -f traefik-deployment.yaml</p>
<pre class="prettyprint">serviceaccount "traefik-ingress-controller" created
deployment "traefik-ingress-controller" created
service "traefik-ingress-service" created</pre>
<p># kubectl apply -f ui.yaml</p>
<pre class="prettyprint lang-bsh">service "traefik-web-ui" created
ingress "traefik-web-ui" created</pre>
<p>
	#&nbsp;kubectl get pods --all-namespaces -o wide
</p>
<pre class="prettyprint lang-bsh">NAME                                          READY     STATUS    RESTARTS   AGE       IP             NODE
…
traefik-ingress-controller-795ffb7d78-ppw94   1/1       Running   0          5m        10.110.16.15   kube-node1</pre>
<p>
	<span style="color:#E53333;">As shown, our traefik-ingress pod has hostNetwork enabled: the IP listed is the node IP rather than an internal pod IP. That means we can reach the traefik reverse proxy directly at node IP 10.110.16.15 and let traefik hand out connections to the containers in our k8s cluster.</span>
</p>
<p>
	<span style="color:#E53333;"><span style="color:#000000;"># kubectl get service --all-namespaces</span></span>
</p>
<p>
	<span style="color:#E53333;"><span style="color:#000000;"> </span></span>
</p>
<pre class="prettyprint lang-js">NAMESPACE      NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                       AGE
default        frontend                    ClusterIP   10.110.87.1      &lt;none&gt;        80/TCP                        4h
default        kubernetes                  ClusterIP   10.96.0.1        &lt;none&gt;        443/TCP                       1d
default        my-nginx                    ClusterIP   10.102.200.83    &lt;none&gt;        80/TCP                        4h
kube-system    kube-dns                    ClusterIP   10.96.0.10       &lt;none&gt;        53/UDP,53/TCP                 1d
kube-system    kubernetes-dashboard        NodePort    10.108.187.244   &lt;none&gt;        443:32666/TCP                 1d
kube-system    tiller-deploy               ClusterIP   10.111.251.3     &lt;none&gt;        44134/TCP                     1d
kube-system    traefik-ingress-service     NodePort    10.100.70.138    &lt;none&gt;        80:31087/TCP,8080:31017/TCP   13m
kube-system    traefik-web-ui              ClusterIP   10.96.54.167     &lt;none&gt;        80/TCP                        13m
newegg-nginx   newegg-nginx-newegg-nginx   NodePort    10.97.200.124    &lt;none&gt;        80:32239/TCP                  8h</pre>
<p><span style="color:#000000;"></span> </p>
<p>
	There are two ways to reach the traefik admin UI.
</p>
<p>
	<span style="color:#337FE5;">1) Via NodePort</span>
</p>
<p>
	From a machine on the same network segment as the k8s nodes, browse to exposed node port 31017.
</p>
<p>
	<a href="http://www.showerlee.com/?attachment_id=2729"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/09/traefik1.png" alt="" width="750" height="454" class="alignnone size-full wp-image-2729" title="" align="" /></a>
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;">2) Via ui.yaml, configuring a traefik+ingress reverse proxy so the admin UI is reachable directly by hostname</span>
</p>
<p>
	# vi ui.yaml
</p>
<pre class="prettyprint lang-bsh">---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - name: web
    port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  rules:
  - host: traefik-ui.k8s
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-web-ui
          servicePort: web</pre>
<p>
	On the local macOS machine, add a hosts entry resolving <span>traefik-ui.k8s to 10.110.16.15</span>
</p>
<p>
	# echo "10.110.16.15 traefik-ui.k8s" &gt;&gt; /etc/hosts
</p>
<p>
	Again via the traefik reverse proxy, the admin UI is now reachable over HTTP by domain name.
</p>
<p>
	<a href="http://www.showerlee.com/?attachment_id=2732"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/09/traefik2.png" alt="" width="750" height="452" class="alignnone size-full wp-image-2732" title="" align="" /></a>
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;font-size:14px;">4. Deploy two services, nginx1-7 and nginx1-8, and configure Traefik to balance them:</span>
</p>
<p>
	Define a Deployment running an nginx 1.7 instance, with a Service listening on its internal port 80.
</p>
<p>
	#&nbsp;vi nginx1-7.yaml
</p>
<pre class="prettyprint lang-bsh">apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx1-7
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx1-7-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx1-7
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80</pre>
<p>
	<span><span>Define a Deployment running an nginx 1.8 instance, with a Service listening on its internal port 80.</span><br />
</span>
</p>
<p>
	<span>#&nbsp;</span><span>vi nginx1-8.yaml</span>
</p>
<p>
	<span> </span>
</p>
<pre class="prettyprint lang-bsh">apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx1-8
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx1-8-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx1-8
    spec:
      containers:
      - name: nginx
        image: nginx:1.8
        ports:
        - containerPort: 80</pre>
<p>
	An Ingress then binds the serviceName of the nginx 1.7 and 1.8 instances to different hostnames, so traefik can reverse-proxy each by domain.
</p>
<p>
	# vi traefik.yaml
</p>
<pre class="prettyprint lang-bsh">apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-ingress
  namespace: default
spec:
  rules:
  - host: traefik.nginx.io
    http:
      paths:
      - path: /
        backend:
          serviceName: my-nginx
          servicePort: 80
  - host: traefik.frontend.io
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80</pre>
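<p>
	The host-based dispatch this Ingress declares can be sketched in Python. The routing table below is only an illustration of Traefik's behavior (service names taken from the manifest above), not its implementation:
</p>

```python
# Toy model of host-based routing: Traefik inspects the HTTP Host
# header and forwards the request to the matching backend service.
routes = {
    "traefik.nginx.io": "my-nginx",
    "traefik.frontend.io": "frontend",
}

def dispatch(host):
    # unmatched hosts get no backend; Traefik answers 404
    return routes.get(host, "404 page not found")

print(dispatch("traefik.nginx.io"))     # my-nginx
print(dispatch("traefik.frontend.io"))  # frontend
print(dispatch("unknown.example"))      # 404 page not found
```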
<p>
	<span style="color:#E53333;">By configuring the local macOS hosts file to point these DNS records at the traefik IP exposed externally, traefik takes over, as a proxy, every connection into the instances inside k8s; the principle is essentially the same as with apache or nginx before.</span>
</p>
<p>
	<span># echo "10.110.16.15 traefik.nginx.io traefik.frontend.io" &gt;&gt; /etc/hosts</span>
</p>
<p>
	Now we can visit <span>traefik.nginx.io and </span><span>traefik.frontend.io straight from a macOS browser, served through the traefik reverse proxy.</span>
</p>
<p>
	<a href="http://www.showerlee.com/?attachment_id=2742"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/09/traefik3.png" alt="" width="800" height="221" class="alignnone size-full wp-image-2742" title="" align="" /></a> <a href="http://www.showerlee.com/?attachment_id=2743"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/09/traefik4.png" alt="" width="800" height="226" class="alignnone size-large wp-image-2743" title="" align="" /></a>
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;font-size:14px;">5. Serve the traefik admin UI over HTTPS</span>
</p>
<p>
	<span style="color:#337FE5;">1) Create an HTTPS certificate and save it into a k8s secret.</span>
</p>
<p>
	# mkdir -p /opt/k8s/ssl
</p>
<p>
	# cd /opt/k8s/ssl
</p>
<p>
	# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=traefik-ui.k8s"
</p>
<p>
	# kubectl create secret generic traefik-cert --from-file=tls.crt --from-file=tls.key -n kube-system
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;">2) Write the traefik reverse-proxy configuration and save it into a k8s configmap.</span>
</p>
<p>
	# mkdir -p /opt/k8s/conf/
</p>
<p>
	# cd /opt/k8s/conf/
</p>
<p>
	# vi traefik-ui.toml
</p>
<pre class="prettyprint">defaultEntryPoints = ["http","https"]
[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    regex = "^http://traefik-ui.k8s/(.*)"
    replacement = "https://traefik-ui.k8s/$1"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      certFile = "/opt/k8s/ssl/tls.crt"
      keyFile = "/opt/k8s/ssl/tls.key"</pre>
<p>
	# kubectl create configmap traefik-ui-conf --from-file=traefik-ui.toml -n kube-system
</p>
<p>
	<span style="color:#E53333;">With this, the traefik reverse proxy is configured so that </span><span style="color:#E53333;"><a href="http://traefik-ui.k8s" rel="nofollow">http://traefik-ui.k8s</a> redirects to </span><span style="color:#E53333;"><a href="https://traefik-ui.k8s" rel="nofollow">https://traefik-ui.k8s</a>, enforcing encrypted HTTPS access.</span>
</p>
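<p>
	The redirect in the toml above is a plain regex rewrite. A quick Python check of the same pattern (illustrative only; Traefik evaluates the rule itself):
</p>

```python
import re

# The [entryPoints.http.redirect] block is a regex substitution:
# plain-HTTP URLs for traefik-ui.k8s are rewritten to HTTPS.
regex = r"^http://traefik-ui.k8s/(.*)"
replacement = r"https://traefik-ui.k8s/\1"

print(re.sub(regex, replacement, "http://traefik-ui.k8s/dashboard/"))
# https://traefik-ui.k8s/dashboard/
```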
<p>
<span style="color:#337FE5;">3) Create the traefik Deployment instance with HTTPS reverse-proxy support.</span> </p>
<p>
	# vi traefik-deployment-ssl.yaml&nbsp;
</p>
<pre class="prettyprint">---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: traefik-ingress-lb
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      hostNetwork: true
      restartPolicy: Always
      serviceAccountName: traefik-ingress-controller
      volumes:
      - name: ssl
        secret:
          secretName: traefik-cert
      - name: config
        configMap:
          name: traefik-ui-conf
      containers:
      - image: traefik
        name: traefik-ingress-lb
        volumeMounts:
        - mountPath: "/opt/k8s/ssl"
          name: "ssl"
        - mountPath: "/opt/k8s/conf"
          name: "config"
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
        - name: admin
          containerPort: 8080
          hostPort: 8080
        args:
        - --configFile=/opt/k8s/conf/traefik-ui.toml
        - --web
        - --web.address=:8080
        - --kubernetes
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 443
      name: https
    - protocol: TCP
      port: 8080
      name: admin
  type: NodePort</pre>
<p>
	<span style="color:#E53333;">Here, building on the earlier traefik Deployment, we import the secret and configmap configured above into the Deployment and mount the ssl and conf directories, so that traefik can pick up the SSL certificate and reverse-proxy configuration saved under those k8s node paths.</span>
</p>
<p>
	<span style="color:#E53333;">Finally, the Service exposes port 443 of the Deployment.</span>
</p>
<p>
	
</p>
<p>
	4) Create an Ingress for the traefik admin UI, enabling HTTPS access by hostname.
</p>
<p># vi ui-ssl.yaml</p>
<pre class="prettyprint">---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - name: web
    port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  tls:
    - secretName: traefik-cert
  rules:
  - host: traefik-ui.k8s
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-web-ui
          servicePort: web</pre>
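<p>
	The Ingress above matches on the Host header (traefik-ui.k8s) and a path prefix, then forwards to a backing Service. The lookup it expresses can be sketched in Python; this is a hypothetical illustration of the routing rule, not traefik's actual implementation:
</p>

```python
def build_router(rules):
    """Build a (host, path) -> (serviceName, servicePort) table from
    Ingress-style rules, mirroring spec.rules in ui-ssl.yaml."""
    table = {}
    for rule in rules:
        for p in rule["paths"]:
            table[(rule["host"], p["path"])] = (p["serviceName"], p["servicePort"])
    return table

def route(table, host, path):
    """Pick the longest matching path prefix for the given host,
    roughly what an ingress controller does per request."""
    candidates = [key for key in table if key[0] == host and path.startswith(key[1])]
    if not candidates:
        return None
    best = max(candidates, key=lambda key: len(key[1]))
    return table[best]
```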
<p>
	Finally, visit https://traefik-ui.k8s to verify the result.
</p>
<p>
	<a href="http://www.showerlee.com/archives/2701/traefik5"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/09/traefik5.png" alt="" width="800" height="473" class="alignnone size-full wp-image-2774" title="" align="" /></a>
</p>
<p>
	
</p>
<p>
	With that, we have successfully enabled an HTTPS-encrypted connection to the traefik admin dashboard...
</p>
<p>
	
</p>
<p>
	Further reading on traefik + Ingress:&nbsp;<a href="https://docs.traefik.io/user-guide/kubernetes/" rel="nofollow">https://docs.traefik.io/user-guide/kubernetes/</a></p>
<div>Disclaimer: this post is licensed under <a rel="external" href="http://creativecommons.org/licenses/by-nc-sa/3.0/deed.zh" title="Attribution-NonCommercial-ShareAlike 3.0 Unported">CC BY-NC-SA 3.0</a></div><div>Please credit the source when reposting: <a rel="external" title="DevOps技术分享" href="http://www.showerlee.com/archives/2701">DevOps技术分享</a></div><div>Permalink: <a rel="external" title="Kubernetes之Ingress+Traefik" href="http://www.showerlee.com/archives/2701">http://www.showerlee.com/archives/2701</a></div>]]></content:encoded>
			<wfw:commentRss>http://www.showerlee.com/archives/2701/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Jenkins-Pipeline-CI-CD-with-Helm-on-Kubernetes automated pipeline</title>
		<link>http://www.showerlee.com/archives/2661</link>
		<comments>http://www.showerlee.com/archives/2661#comments</comments>
		<pubDate>Thu, 06 Sep 2018 03:40:13 +0000</pubDate>
		<dc:creator>showerlee</dc:creator>
				<category><![CDATA[DevTools]]></category>
		<category><![CDATA[Docker]]></category>
		<category><![CDATA[Jenkins]]></category>
		<category><![CDATA[Kubernetes]]></category>

		<guid isPermaLink="false">http://www.showerlee.com/?p=2661</guid>
		<description><![CDATA[因为忙于家里事情, 很久没有更新我的博客, 这里我将这半年多对Jenkins pipeline集成k8s实现自 [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>
	Having been busy with family matters, I have not updated this blog for a long time. Here I would like to share what I have learned over the past six months about integrating Jenkins Pipeline with k8s to build an automated deployment pipeline. Corrections and suggestions are very welcome.
</p>
<p>
	
</p>
<p>
	A quick introduction to the tools this pipeline uses:
</p>
<p>
	Jenkins Pipeline: one of the mainstream DevOps / CI/CD frameworks at home and abroad, it chains every stage of the software delivery cycle into a single automated pipeline. It promotes the idea of "pipeline as code": all the steps of the delivery cycle (version control - code inspection - build - unit test - package - staging deployment - integration test - functional test - production deployment) are pushed to Jenkins as code, so the pipeline configuration itself becomes code and is version-controlled like everything else.
</p>
<p>
	Jenkins currently offers two pipeline syntaxes: Declarative Pipeline and Scripted Pipeline. The former is easier to read and well suited to beginners, but is somewhat weaker in plugin compatibility and in the logic it can express; the latter is more advanced, supports real flow control and most mainstream integration modules, and is recommended for anyone with Groovy scripting experience.
</p>
<p>
	For details, see: <a href="http://www.showerlee.com/archives/1972" rel="nofollow">http://www.showerlee.com/archives/1972</a>
</p>
<p>
	
</p>
<p>
	Kubernetes: a container-orchestration system that has surged in popularity in recent years,&nbsp;<span style="color:#111111;font-family:Helvetica;font-size:13px;background-color:#FFFFFF;">an open-source system for automating the deployment, scaling, and management of containerized applications. It aims to provide "a platform for automating deployment, scaling, and operations of application containers across clusters of hosts". It supports a range of container tools, with Docker as the most common runtime. It is also the mainstream Docker container-management system in today's DevOps landscape: many companies at home and abroad are migrating, or have already migrated, from traditional virtual-machine architectures to Docker microservices, which makes learning how to automate deployments on top of k8s well worth the effort.</span>
</p>
<p>
	<span style="color:#111111;font-family:Helvetica;font-size:13px;background-color:#FFFFFF;">For details, see:&nbsp;<a href="http://www.showerlee.com/archives/2200" rel="nofollow">http://www.showerlee.com/archives/2200</a></span>
</p>
<p>
	
</p>
<p>
	<span style="color:#111111;font-family:Helvetica;font-size:13px;background-color:#FFFFFF;">Helm: best regarded as a companion tool for k8s, namely its package manager. It conveniently bundles scattered k8s deployment manifests into a single project package (a chart), which ultimately simplifies version control, packaging, releasing, deleting, and upgrading applications on Kubernetes.</span>
</p>
<p>
	<span><span style="font-size:13px;background-color:#FFFFFF;">For details, see:&nbsp;<a href="http://www.showerlee.com/archives/2455" rel="nofollow">http://www.showerlee.com/archives/2455</a></span></span>
</p>
<p>
	
</p>
<p>
	<span><span style="font-size:13px;background-color:#FFFFFF;">What this pipeline actually delivers: a pipeline script takes the official centos Docker image, installs and configures it into an nginx static-website container, and releases it to our k8s cluster; along the way it runs a few routine functional tests against the container, modelling a complete automated-delivery workflow on k8s.</span></span>
</p>
<p>
	
</p>
<p>
	<span style="font-size:13px;background-color:#FFFFFF;">The code for this article is available in my GitHub repo:</span>
</p>
<p>
	<span style="font-size:13px;background-color:#FFFFFF;"><a href="https://github.com/showerlee/Jenkins-Pipeline-CI-CD-with-Helm-on-Kubernetes" rel="nofollow">https://github.com/showerlee/Jenkins-Pipeline-CI-CD-with-Helm-on-Kubernetes</a><br />
</span>
</p>
<p>
	<span><span style="font-size:13px;background-color:#FFFFFF;"><br />
</span></span>
</p>
<p>
	<span><span style="font-size:13px;background-color:#FFFFFF;">Okay, Let's roll out...</span></span>
</p>
<p>
	<span><span style="font-size:13px;background-color:#FFFFFF;"><br />
</span></span>
</p>
<p>
	<span><span style="font-size:13px;background-color:#FFFFFF;"> </span></span>
</p>
<p>
	<span style="color:#337FE5;font-size:16px;"><span style="font-family:Helvetica;background-color:#FFFFFF;"><strong>Environment</strong></span></span>
</p>
<p>
	Local Desktop: MacOS
</p>
<p>
	Virtual Machine: Virtual Box
</p>
<p>
	Virtual System: CentOS 7.4
</p>
<p>
	Jenkins: Jenkins 2.138
</p>
<p>
	Kubernetes: Kubernetes 1.9
</p>
<p>
	Docker:&nbsp;17.03.2-ce
</p>
<p>
	Helm:&nbsp;helm-v2.7.0
</p>
<p>
	kube-master 10.110.16.14
</p>
<p>
	kube-node-1 10.110.16.15
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;font-family:Helvetica;font-size:16px;background-color:#FFFFFF;"><strong>I. System environment configuration</strong></span>
</p>
<p>
	<span style="color:#337FE5;">1. Disable SELinux and firewalld</span>
</p>
<p>
	# vi /etc/sysconfig/selinux
</p>
<pre class="prettyprint lang-bsh">...
SELINUX=disabled 
...</pre>
<p><span># setenforce 0</span></p>
<p>
	# systemctl stop firewalld&nbsp; &amp;&amp; systemctl disable firewalld
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;">2. Install the k8s environment.</span>
</p>
<p><a href="http://www.showerlee.com/archives/2200" rel="nofollow">http://www.showerlee.com/archives/2200</a></p>
<div>
	
</div>
<p><span style="background-color:#FFFFFF;color:#337FE5;">3. Install the helm environment.</span> </p>
<p><a href="http://www.showerlee.com/archives/2455" rel="nofollow">http://www.showerlee.com/archives/2455</a></p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;">4. Install the Jenkins environment.</span>
</p>
<p><a href="http://www.showerlee.com/archives/1880" rel="nofollow">http://www.showerlee.com/archives/1880</a></p>
<p>
	
</p>
<p>
	<strong><span style="font-size:18px;color:#337FE5;">II. Jenkins Pipeline configuration</span></strong>
</p>
<p>
	<span style="color:#337FE5;">1. Add the jenkins user to the default docker group so that Jenkins can access /var/run/docker.sock directly</span>
</p>
<p>
	# usermod -a -G docker jenkins
</p>
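<p>
	Why this works: /var/run/docker.sock is group-owned by docker, so any member of that group can talk to the Docker daemon without sudo. A small, hypothetical Python helper that checks such membership against an /etc/group entry:
</p>

```python
def in_group(etc_group_line: str, user: str) -> bool:
    """Check supplementary-group membership from one /etc/group entry,
    e.g. 'docker:x:999:jenkins,alice'. (Primary groups, which live in
    /etc/passwd, are outside this sketch.)"""
    fields = etc_group_line.strip().split(":")
    if len(fields) != 4:
        return False
    members = fields[3].split(",") if fields[3] else []
    return user in members
```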
<p>
	
</p>
<p>
	<span style="color:#337FE5;">2. Adjust the global security settings so the pipeline can invoke helm directly</span>
</p>
<p>
	Go to Jenkins --&gt; Manage Jenkins --&gt; Configure Global Security<br />
Under <span>Authorization, select </span>Project-based Matrix Authorization Strategy
</p>
<p>
	Grant the <span>Anonymous User read access to Jenkins jobs</span>
</p>
<p>
	As shown:
</p>
<p>
	<a href="http://www.showerlee.com/?attachment_id=2680"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/09/permision.png" alt="" width="700" height="326" class="alignnone size-full wp-image-2680" title="" align="" /></a>
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;">3. Load the kubectl environment (kubeconfig) for the jenkins user.</span>
</p>
<p>
	#&nbsp;mkdir -p /home/jenkins/.kube
</p>
<p>
	# cp -i /etc/kubernetes/admin.conf /home/jenkins/.kube/config
</p>
<p>
	# chown jenkins:jenkins /home/jenkins/.kube/config
</p>
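<p>
	The three commands above amount to: create ~/.kube, copy the cluster admin kubeconfig into it, and hand ownership to jenkins. A hypothetical Python sketch of the same steps (the chown needs the same root privileges as the shell version):
</p>

```python
import os
import shutil

def install_kubeconfig(admin_conf, home, uid=None, gid=None):
    """Replicate the three shell commands above: make ~/.kube,
    copy admin.conf to ~/.kube/config, and optionally chown it."""
    kube_dir = os.path.join(home, ".kube")
    os.makedirs(kube_dir, exist_ok=True)
    dest = os.path.join(kube_dir, "config")
    shutil.copyfile(admin_conf, dest)
    if uid is not None and gid is not None:
        os.chown(dest, uid, gid)  # requires root, like the chown above
    return dest
```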
<p>
	
</p>
<p>
	<span style="color:#337FE5;">4. Create Jenkins credentials for GitHub and Docker Hub; the corresponding pipeline steps will need them later.</span>
</p>
<p>
	<a href="http://www.showerlee.com/?attachment_id=2687"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/09/credential.png" alt="" width="700" height="326" class="alignnone size-full wp-image-2687" title="" align="" /></a><span></span><a href="http://www.showerlee.com/?attachment_id=2688"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/09/credential1.png" alt="" width="700" height="289" class="alignnone size-large wp-image-2688" title="" align="" /></a>
</p>
<p>
	
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;">5. Create the pipeline job</span>
</p>
<p>
	<a href="http://www.showerlee.com/?attachment_id=2683"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/09/pipeline1.png" alt="" width="800" height="441" class="alignnone size-full wp-image-2683" title="" align="" /></a>
</p>
<pre class="prettyprint">#!groovy

def kubectlTest() {
    // Test that kubectl can communicate correctly with the Kubernetes API
    echo "running kubectl test"
    sh "kubectl get nodes"

}

def helmLint(String chart_dir) {
    // lint helm chart
    sh "/usr/local/bin/helm lint ${chart_dir}"

}

def helmDeploy(Map args) {
    //configure helm client and confirm tiller process is installed

    if (args.dry_run) {
        println "Running dry-run deployment"

        sh "/usr/local/bin/helm upgrade --dry-run --debug --install ${args.name} ${args.chart_dir} --set ImageTag=${args.tag},Replicas=${args.replicas},Cpu=${args.cpu},Memory=${args.memory},DomainName=${args.name} --namespace=${args.name}"
    } else {
        println "Running deployment"
        sh "/usr/local/bin/helm upgrade --install ${args.name} ${args.chart_dir} --set ImageTag=${args.tag},Replicas=${args.replicas},Cpu=${args.cpu},Memory=${args.memory},DomainName=${args.name} --namespace=${args.name}"

        echo "Application ${args.name} successfully deployed. Use helm status ${args.name} to check"
    }
}



timeout(time: 2000, unit: 'SECONDS') {
    node {
        println "----------------------------------------------------------------------------"
        stage 'Check out pipeline from GitHub Repo'
        //git url: 'https://github.com/showerlee/Jenkins-Pipeline-CI-CD-with-Helm-on-Kubernetes.git'
        git branch: 'master',
            credentialsId: 'showerlee-github',
            url: 'https://github.com/showerlee/Jenkins-Pipeline-CI-CD-with-Helm-on-Kubernetes.git'

        // Setup the Docker Registry (Docker Hub) + Credentials 
        registry_url = "https://index.docker.io/v1/" // Docker Hub
        docker_creds_id = "showerlee-dockerhub" // name of the Jenkins Credentials ID

        def pwd = pwd()
        def chart_dir = "${pwd}/charts/newegg-nginx"

        // Add build tag version
        Properties props = new Properties()
        File propsFile = new File("${pwd}/promote.properties")
        props.load(propsFile.newDataInputStream())
        def build_tag_raw = props.getProperty('BUILD_TAG')
        float build_tag = Float.parseFloat(build_tag_raw)+0.1;
        println("Set current build_tag="+build_tag+" temporarily")

        //Set build_tag to index.html
        sh """
        echo "&lt;h1&gt;Welcome Newegg Nginx Test Version: ${build_tag}&lt;/h1&gt;" &gt; index.html
        """

        def inputFile = readFile('config.json')
        def config = new groovy.json.JsonSlurperClassic().parseText(inputFile)
        println "pipeline config ==&gt; ${config}"
        println "----------------------------------------------------------------------------"
        
        stage 'Register DockerHub'
        echo "[INFO] Register Dockerhub"
        docker.withRegistry("${registry_url}", "${docker_creds_id}") {
        
            // Set up the container to build 
            maintainer_name = "showerlee"
            container_name = "nginx-test"
            println "----------------------------------------------------------------------------"

            stage "Build Nginx Container"
            echo "[INFO] Building Nginx with docker.build(${maintainer_name}/${container_name}:${build_tag})"
            container = docker.build("${maintainer_name}/${container_name}:${build_tag}", '.')
            println "----------------------------------------------------------------------------"
            try {
                
                // Start Testing
                stage "Spin up Nginx Container"
                echo "[INFO] Spin up Nginx Container"
                
                // Run the container with the env file, mounted volumes and the ports:
                docker.image("${maintainer_name}/${container_name}:${build_tag}").withRun("--name=${container_name}  -p 80:80 ")  { c -&gt;
                       
                    // wait for the nginx server to be ready for testing
                    // the 'waitUntil' block needs to return true to stop waiting
                    // in the future it will be handy to specify waiting for a max interval:
                    // https://issues.jenkins-ci.org/browse/JENKINS-29037
                    //
                    waitUntil {
                        sh """
                        set +x
                        ss -antup | grep :::80[^0-9] | grep LISTEN | wc -l | tr -d '\n' &gt; /tmp/wait_results
                        set -x
                        """
                        wait_results = readFile '/tmp/wait_results'

                        echo "[INFO] Wait Results(${wait_results})"
                        if ("${wait_results}" == "1")
                        {
                            echo "[INFO] Nginx is listening on port 80"
                            sh "rm -f /tmp/wait_results"
                            return true
                        }
                        else
                        {
                            echo "[INFO] Nginx is not listening on port 80 yet"
                            return false
                        }
                    } // end of waitUntil
                    
                    // At this point Nginx is running
                    echo "[INFO] Docker Container is running"
                    input 'You can check the running container on docker build server now! Click Proceed to next stage...'    
                    // this pipeline is using 3 tests 
                    // by setting it to more than 3 you can test the error handling and see the pipeline Stage View error message
                    MAX_TESTS = 3
                    for (test_num = 1; test_num &lt;= MAX_TESTS; test_num++) {     
                        println "----------------------------------------------------------------------------"   
                        echo "Running Test(${test_num})"
                    
                        expected_results = 0
                        if (test_num == 1 ) 
                        {
                            // Test we can download the home page from the running docker container
                            echo "[INFO] Check validation of home page"
                            sh """
                            set +x
                            docker exec -t ${container_name} curl -s http://localhost | grep Welcome | wc -l | tr -d '\n' &gt; /tmp/test_results
                            set -x
                            """
                            expected_results = 1
                        }
                        else if (test_num == 2)
                        {
                            // Test if port 80 is exposed
                            echo "[INFO] Check if port 80 is exposed"
                            sh """
                            set +x
                            docker inspect --format '{{ (.NetworkSettings.Ports) }}' ${container_name}
                            docker inspect --format '{{ (.NetworkSettings.Ports) }}' ${container_name} | grep map | grep '80/tcp:' | wc -l | tr -d '\n' &gt; /tmp/test_results
                            set -x
                            """
                            expected_results = 1
                        }
                        else if (test_num == 3)
                        {
                            // Test there's nothing established on the port since no client is connected:
                            echo "[INFO] Check if nothing established from nginx container"
                            sh """
                            set +x
                            docker exec -t ${container_name} ss -apn | grep 80 | grep ESTABLISHED | wc -l | tr -d '\n' &gt; /tmp/test_results
                            set -x
                            """
                            expected_results = 0
                        }
                        else
                        {
                            err_msg = "Missing Test(${test_num})"
                            echo "[ERROR] ${err_msg}"
                            currentBuild.result = 'FAILURE'
                            error "Failed to finish container testing with Message(${err_msg})"
                        }
                        
                        // Now validate the results match the expected results
                        stage "Test(${test_num}) - Validate Results"
                        test_results = readFile '/tmp/test_results'
                        echo "[INFO] Test(${test_num}) Results($test_results) == Expected(${expected_results})"
                        sh """
                        set +x
                        if [ \"${test_results}\" != \"${expected_results}\" ]; 
                        then 
                            echo \" --------------------- Test(${test_num}) Failed--------------------\"
                            echo \" - Test(${test_num}) Failed\"
                            exit 1
                        else 
                            echo \" - Test(${test_num}) Passed\"
                            exit 0
                        fi
                        set -x
                        """

                        echo "[INFO] Finished Running Test(${test_num})"
                    
                        // cleanup after the test run
                        sh "rm -f /tmp/test_results"
                        currentBuild.result = 'SUCCESS'
                    }
                }
                
            } catch (Exception err) {
                err_msg = "Test had Exception(${err})"
                currentBuild.result = 'FAILURE'
                error "FAILED - Stopping build for Error(${err_msg})"
            }
            println "----------------------------------------------------------------------------"
            stage "Push to DockerHub"
            input 'Do you approve to push?'
            container.push()
            currentBuild.result = 'SUCCESS'
            println "----------------------------------------------------------------------------"
            stage "Push properties to git repo"
            echo "Push current build_tag="+build_tag+" to git repo"
            withCredentials([usernamePassword(credentialsId: 'showerlee-github', passwordVariable: 'GIT_PASSWORD', usernameVariable: 'GIT_USERNAME')]) {
                sh """
                set +x
                git --version
                echo 'BUILD_TAG=${build_tag}' &gt; ${pwd}/promote.properties
                git config --global user.email "showerlee@vip.qq.com"
                git config --global user.name "showerlee"
                git add ${pwd}/promote.properties ${pwd}/index.html
                git commit -m"Update docker tag to ${build_tag}"
                git push --set-upstream https://${GIT_USERNAME}:${GIT_PASSWORD}@github.com/showerlee/Jenkins-Pipeline-CI-CD-with-Helm-on-Kubernetes.git master
                set -x
                """
            }
            println "----------------------------------------------------------------------------"
            
        }
        
        stage ('helm test') { 
            echo "[INFO] Start helm test"   
            // run helm chart linter
            echo "[INFO] Run helm chart linter"
            helmLint(chart_dir)

            // dry-run helm chart installation
            echo "[INFO] Dry-run helm chart installation"
            helmDeploy(
                dry_run       : true,
                name          : config.app.name,
                chart_dir     : chart_dir,
                tag           : build_tag,
                replicas      : config.app.replicas,
                cpu           : config.app.cpu,
                memory        : config.app.memory
            )
            println "----------------------------------------------------------------------------"
        }
        
        stage ('helm deploy') {
            input 'Do you approve to deploy?'
            echo "[INFO] Start helm deployment"
            // Deploy using Helm chart
            helmDeploy(
                dry_run       : false,
                name          : config.app.name,
                chart_dir     : chart_dir,
                tag           : build_tag,
                replicas      : config.app.replicas,
                cpu           : config.app.cpu,
                memory        : config.app.memory
            )
            echo "[INFO] Deployment Finished..."
        }
        
        ///////////////////////////////////////
        //
        // Coming Soon Feature Enhancements
        //
        // 1. Add Docker Compose testing as a new Pipeline item that is initiated after this one for "Integration" testing
        // 2. Make sure to set the Pipeline's "Throttle builds" to 1 because the docker containers will collide on resources like ports and names
        // 3. Should be able to parallelize the docker.withRegistry() methods to ensure the container is running on the slave
        // 4. After the tests finish (and before they start), clean up container images to prevent stale docker image builds from affecting the current test run
    }
}</pre>
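<p>
	A detail of the script worth unpacking: the build-tag bump loads promote.properties, parses BUILD_TAG as a float, and adds 0.1. A hypothetical Python equivalent, with an explicit round() because, like the Groovy original, this is floating-point arithmetic:
</p>

```python
def bump_build_tag(properties_text):
    """Mirror the Groovy Properties + Float.parseFloat logic: read
    BUILD_TAG from promote.properties and bump it by 0.1."""
    props = {}
    for line in properties_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    # round() keeps the tag tidy; bare float addition gives 1.6000000000000001
    return round(float(props["BUILD_TAG"]) + 0.1, 1)
```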
<p>
	
</p>
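<p>
	The JsonSlurperClassic step parses config.json into the values the helm stages consume; the same parse in Python is straightforward with the json module (a hypothetical sketch; the field names follow the config echoed in the build log):
</p>

```python
import json

def load_pipeline_config(text):
    """Parse config.json the way the JsonSlurperClassic step does and
    pull out the fields passed to helmDeploy."""
    config = json.loads(text)
    app = config["app"]
    return {
        "name": app["name"],
        "replicas": app["replicas"],
        "cpu": app["cpu"],
        "memory": app["memory"],
    }
```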
<p>
	<span style="color:#337FE5;">6. Run the pipeline build</span>
</p>
<p>
	<a href="http://www.showerlee.com/?attachment_id=2690"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/09/pipeline3.png" alt="" width="800" height="329" class="alignnone size-full wp-image-2690" title="" align="" /></a>
</p>
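<p>
	One stage to watch for in the log: waitUntil polls with ss until nginx is listening on port 80. The same idea with a plain TCP connect, as a hypothetical Python helper:
</p>

```python
import socket
import time

def wait_for_port(host, port, timeout=10.0, interval=0.2):
    """Python counterpart of the pipeline's waitUntil loop: poll until
    something accepts TCP connections on host:port, or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(interval)
            try:
                if s.connect_ex((host, port)) == 0:
                    return True
            except OSError:
                pass
        time.sleep(interval)
    return False
```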
<pre class="prettyprint">Started by user admin
Obtained Jenkinsfile from git https://github.com/showerlee/Jenkins-Pipeline-CI-CD-with-Helm-on-Kubernetes.git
[Pipeline] timeout
Timeout set to expire in 33 min
[Pipeline] {
[Pipeline] node
Running on Jenkins in /var/jenkins_home/workspace/kube-helm-pipeline@2
[Pipeline] {
[Pipeline] echo
----------------------------------------------------------------------------
[Pipeline] stage (Check out pipeline from GitHub Repo)
Using the ‘stage’ step without a block argument is deprecated
Entering stage Check out pipeline from GitHub Repo
Proceeding
[Pipeline] git
 &gt; git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 &gt; git config remote.origin.url https://github.com/showerlee/Jenkins-Pipeline-CI-CD-with-Helm-on-Kubernetes.git # timeout=10
Fetching upstream changes from https://github.com/showerlee/Jenkins-Pipeline-CI-CD-with-Helm-on-Kubernetes.git
 &gt; git --version # timeout=10
using GIT_ASKPASS to set credentials github credential
 &gt; git fetch --tags --progress https://github.com/showerlee/Jenkins-Pipeline-CI-CD-with-Helm-on-Kubernetes.git +refs/heads/*:refs/remotes/origin/*
 &gt; git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 &gt; git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision ce4fc2593333f3ed89cd5654966919b0aa97628d (refs/remotes/origin/master)
 &gt; git config core.sparsecheckout # timeout=10
 &gt; git checkout -f ce4fc2593333f3ed89cd5654966919b0aa97628d
 &gt; git branch -a -v --no-abbrev # timeout=10
 &gt; git branch -D master # timeout=10
 &gt; git checkout -b master ce4fc2593333f3ed89cd5654966919b0aa97628d
Commit message: "Update docker tag to 1.5"
 &gt; git rev-list --no-walk ce4fc2593333f3ed89cd5654966919b0aa97628d # timeout=10
[Pipeline] pwd
[Pipeline] echo
Set current build_tag=1.6 temporarily
[Pipeline] sh
[kube-helm-pipeline@2] Running shell script
+ echo '&lt;h1&gt;Welcome Newegg Nginx Test Version: 1.6&lt;/h1&gt;'
[Pipeline] readFile
[Pipeline] echo
pipeline config ==&gt; [app:[memory:128Mi, replicas:3, name:newegg-nginx, cpu:10m], pipeline:[library:[branch:master], enabled:true]]
[Pipeline] echo
----------------------------------------------------------------------------
[Pipeline] stage (Register DockerHub)
Using the ‘stage’ step without a block argument is deprecated
Entering stage Register DockerHub
Proceeding
[Pipeline] echo
[INFO] Register Dockerhub
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withDockerRegistry
$ docker login -u showerlee -p ******** https://index.docker.io/v1/
Login Succeeded
[Pipeline] {
[Pipeline] echo
----------------------------------------------------------------------------
[Pipeline] stage (Build Nginx Container)
Using the ‘stage’ step without a block argument is deprecated
Entering stage Build Nginx Container
Proceeding
[Pipeline] echo
[INFO] Building Nginx with docker.build(showerlee/nginx-test:1.6)
[Pipeline] sh
[kube-helm-pipeline@2] Running shell script
+ docker build -t showerlee/nginx-test:1.6 .
Sending build context to Docker daemon 2.164 MB

Step 1/6 : FROM centos:centos7
 ---&gt; 5182e96772bf
Step 2/6 : MAINTAINER showerlee
 ---&gt; Using cache
 ---&gt; cc77ae5c175a
Step 3/6 : RUN yum -y update         &amp;&amp; yum clean all         &amp;&amp; yum install -y epel-release         &amp;&amp; yum install -y nginx iproute
 ---&gt; Using cache
 ---&gt; 2c718c146ba5
Step 4/6 : EXPOSE 80
 ---&gt; Using cache
 ---&gt; 829b29a719e6
Step 5/6 : COPY index.html /usr/share/nginx/html/
 ---&gt; Using cache
 ---&gt; 22e805b84cee
Step 6/6 : CMD nginx -g daemon off;
 ---&gt; Using cache
 ---&gt; de02105d9033
Successfully built de02105d9033
[Pipeline] dockerFingerprintFrom
[Pipeline] echo
----------------------------------------------------------------------------
[Pipeline] stage (Spin up Nginx Container)
Using the ‘stage’ step without a block argument is deprecated
Entering stage Spin up Nginx Container
Proceeding
[Pipeline] echo
[INFO] Spin up Nginx Container
[Pipeline] sh
[kube-helm-pipeline@2] Running shell script
+ docker run -d --name=nginx-test -p 80:80 showerlee/nginx-test:1.6
[Pipeline] dockerFingerprintRun
[Pipeline] waitUntil
[Pipeline] {
[Pipeline] sh
[kube-helm-pipeline@2] Running shell script
+ set +x
[Pipeline] readFile
[Pipeline] echo
[INFO] Wait Results(1)
[Pipeline] echo
[INFO] Nginx is listening on port 80
[Pipeline] sh
[kube-helm-pipeline@2] Running shell script
+ rm -f /tmp/wait_results
[Pipeline] }
[Pipeline] // waitUntil
[Pipeline] echo
[INFO] Docker Container is running
[Pipeline] input
You can check the running container on docker build server now! Click Proceed to next stage...
Proceed or Abort
Approved by admin
[Pipeline] echo
----------------------------------------------------------------------------
[Pipeline] echo
Running Test(1)
[Pipeline] echo
[INFO] Check validation of home page
[Pipeline] sh
[kube-helm-pipeline@2] Running shell script
+ set +x
[Pipeline] stage (Test(1) - Validate Results)
Using the ‘stage’ step without a block argument is deprecated
Entering stage Test(1) - Validate Results
Proceeding
[Pipeline] readFile
[Pipeline] echo
[INFO] Test(1) Results(1) == Expected(1)
[Pipeline] sh
[kube-helm-pipeline@2] Running shell script
+ set +x
 - Test(1) Passed
[Pipeline] echo
[INFO] Finished Running Test(1)
[Pipeline] sh
[kube-helm-pipeline@2] Running shell script
+ rm -f /tmp/test_results
[Pipeline] echo
----------------------------------------------------------------------------
[Pipeline] echo
Running Test(2)
[Pipeline] echo
[INFO] Check if port 80 is exposed
[Pipeline] sh
[kube-helm-pipeline@2] Running shell script
+ set +x
map[80/tcp:[{0.0.0.0 80}]]
[Pipeline] stage (Test(2) - Validate Results)
Using the ‘stage’ step without a block argument is deprecated
Entering stage Test(2) - Validate Results
Proceeding
[Pipeline] readFile
[Pipeline] echo
[INFO] Test(2) Results(1) == Expected(1)
[Pipeline] sh
[kube-helm-pipeline@2] Running shell script
+ set +x
 - Test(2) Passed
[Pipeline] echo
[INFO] Finished Running Test(2)
[Pipeline] sh
[kube-helm-pipeline@2] Running shell script
+ rm -f /tmp/test_results
[Pipeline] echo
----------------------------------------------------------------------------
[Pipeline] echo
Running Test(3)
[Pipeline] echo
[INFO] Check if nothing established from nginx container
[Pipeline] sh
[kube-helm-pipeline@2] Running shell script
+ set +x
[Pipeline] stage (Test(3) - Validate Results)
Using the ‘stage’ step without a block argument is deprecated
Entering stage Test(3) - Validate Results
Proceeding
[Pipeline] readFile
[Pipeline] echo
[INFO] Test(3) Results(0) == Expected(0)
[Pipeline] sh
[kube-helm-pipeline@2] Running shell script
+ set +x
 - Test(3) Passed
[Pipeline] echo
[INFO] Finished Running Test(3)
[Pipeline] sh
[kube-helm-pipeline@2] Running shell script
+ rm -f /tmp/test_results
[Pipeline] sh
[kube-helm-pipeline@2] Running shell script
+ docker stop 05a19a7c8519a04cc03029d7273a8e2ae3363e74a03b6d4167dfbec0bf6ee978
05a19a7c8519a04cc03029d7273a8e2ae3363e74a03b6d4167dfbec0bf6ee978
+ docker rm -f 05a19a7c8519a04cc03029d7273a8e2ae3363e74a03b6d4167dfbec0bf6ee978
05a19a7c8519a04cc03029d7273a8e2ae3363e74a03b6d4167dfbec0bf6ee978
[Pipeline] echo
----------------------------------------------------------------------------
[Pipeline] stage (Push to DockerHub)
Using the ‘stage’ step without a block argument is deprecated
Entering stage Push to DockerHub
Proceeding
[Pipeline] input
Do you approve to push?
Proceed or Abort
Approved by admin
[Pipeline] sh
[kube-helm-pipeline@2] Running shell script
+ docker tag showerlee/nginx-test:1.6 index.docker.io/showerlee/nginx-test:1.6
[Pipeline] sh
[kube-helm-pipeline@2] Running shell script
+ docker push index.docker.io/showerlee/nginx-test:1.6
The push refers to a repository [docker.io/showerlee/nginx-test]
aedd55edeb74: Preparing
85e9d4859663: Preparing
1d31b5806ba4: Preparing
85e9d4859663: Layer already exists
1d31b5806ba4: Layer already exists
aedd55edeb74: Retrying in 5 seconds
aedd55edeb74: Retrying in 4 seconds
aedd55edeb74: Retrying in 3 seconds
aedd55edeb74: Retrying in 2 seconds
aedd55edeb74: Retrying in 1 second
aedd55edeb74: Pushed
1.6: digest: sha256:8ee214129c880a0d759837daf30c31772797fa6709f11edd9e30db6891710de4 size: 948
[Pipeline] echo
----------------------------------------------------------------------------
[Pipeline] stage (Push properties to git repo)
Using the ‘stage’ step without a block argument is deprecated
Entering stage Push properties to git repo
Proceeding
[Pipeline] echo
Push current build_tag=1.6 to git repo
[Pipeline] withCredentials
[Pipeline] {
[Pipeline] sh
[kube-helm-pipeline@2] Running shell script
+ set +x
git version 1.8.3.1
[master cdbb62c] Update docker tag to 1.6
 2 files changed, 2 insertions(+), 2 deletions(-)
To <a href="https://****:****@github.com/****/Jenkins-Pipeline-CI-CD-with-Helm-on-Kubernetes.git" rel="nofollow">https://****:****@github.com/****/Jenkins-Pipeline-CI-CD-with-Helm-on-Kubernetes.git</a>
   ce4fc25..cdbb62c  master -&gt; master
Branch master set up to track remote branch master from <a href="https://****:****@github.com/****/Jenkins-Pipeline-CI-CD-with-Helm-on-Kubernetes.git" rel="nofollow">https://****:****@github.com/****/Jenkins-Pipeline-CI-CD-with-Helm-on-Kubernetes.git</a>.
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] echo
----------------------------------------------------------------------------
[Pipeline] }
[Pipeline] // withDockerRegistry
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] stage
[Pipeline] { (helm test)
[Pipeline] echo
[INFO] Start helm test
[Pipeline] echo
[INFO] Run helm chart linter
[Pipeline] sh
[kube-helm-pipeline@2] Running shell script
+ /usr/local/bin/helm lint /var/jenkins_home/workspace/kube-helm-pipeline@2/charts/newegg-nginx
==&gt; Linting /var/jenkins_home/workspace/kube-helm-pipeline@2/charts/newegg-nginx
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
[Pipeline] echo
[INFO] Dry-run helm chart installation
[Pipeline] echo
Running dry-run deployment
[Pipeline] sh
[kube-helm-pipeline@2] Running shell script
+ /usr/local/bin/helm upgrade --dry-run --debug --install newegg-nginx /var/jenkins_home/workspace/kube-helm-pipeline@2/charts/newegg-nginx --set ImageTag=1.6,Replicas=3,Cpu=10m,Memory=128Mi,DomainName=newegg-nginx --namespace=newegg-nginx
[debug] Created tunnel using local port: '39460'

[debug] SERVER: "localhost:39460"

Release "newegg-nginx" does not exist. Installing it now.
[debug] CHART PATH: /var/jenkins_home/workspace/kube-helm-pipeline@2/charts/newegg-nginx

NAME:   newegg-nginx
REVISION: 1
RELEASED: Sun Aug 26 01:38:02 2018
CHART: newegg-nginx-1.0.0
USER-SUPPLIED VALUES:
Cpu: 10m
DomainName: newegg-nginx
ImageTag: "1.6"
Memory: 128Mi
Replicas: 3

COMPUTED VALUES:
ContainerPort: 80
Cpu: 10m
DomainName: newegg-nginx
Image: showerlee/nginx-test
ImagePullPolicy: Always
ImageTag: "1.6"
Imagetag: latest
Memory: 128Mi
Replicas: 3
ServicePort: 80
ServiceType: NodePort

HOOKS:
MANIFEST:

---
# Source: newegg-nginx/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: newegg-nginx-newegg-nginx
  labels:    
    app: newegg-nginx-newegg-nginx
    version: 1.0.0
    release: newegg-nginx
spec:
  type: "NodePort"
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: newegg-nginx-newegg-nginx
---
# Source: newegg-nginx/templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: newegg-nginx-newegg-nginx
  labels:    
    app: newegg-nginx-newegg-nginx
    version: 1.0.0
    release: newegg-nginx
spec:
  replicas: 3
  template:
    metadata:
      labels:        
        app: newegg-nginx-newegg-nginx
        version: 1.0.0
        release: newegg-nginx
    spec:
      containers:
        - name: newegg-nginx
          image: "showerlee/nginx-test:1.6"
          imagePullPolicy: "Always"
          ports:
            - containerPort: 80
              protocol: TCP
          resources:
            requests:
              cpu: "10m"
              memory: "128Mi"
---
# Source: newegg-nginx/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: newegg-nginx-newegg-nginx
  labels:    
    app: newegg-nginx-newegg-nginx
    version: 1.0.0
    release: newegg-nginx
spec:
  rules:
  - host: newegg-nginx.buyabs.corp
    http:
      paths:
      - path: /
        backend:
          serviceName: newegg-nginx-newegg-nginx
          servicePort: 80
[Pipeline] echo
----------------------------------------------------------------------------
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (helm deploy)
[Pipeline] input
Do you approve to deploy?
Proceed or Abort
Approved by admin
[Pipeline] echo
[INFO] Start helm deployment
[Pipeline] echo
Running deployment
[Pipeline] sh
[kube-helm-pipeline@2] Running shell script
+ /usr/local/bin/helm upgrade --install newegg-nginx /var/jenkins_home/workspace/kube-helm-pipeline@2/charts/newegg-nginx --set ImageTag=1.6,Replicas=3,Cpu=10m,Memory=128Mi,DomainName=newegg-nginx --namespace=newegg-nginx
Release "newegg-nginx" does not exist. Installing it now.
NAME:   newegg-nginx
LAST DEPLOYED: Sun Aug 26 01:38:24 2018
NAMESPACE: newegg-nginx
STATUS: DEPLOYED

RESOURCES:
==&gt; v1beta1/Ingress
NAME                       HOSTS                     ADDRESS  PORTS  AGE
newegg-nginx-newegg-nginx  newegg-nginx.buyabs.corp  80       1m

==&gt; v1/Pod(related)
NAME                                        READY  STATUS             RESTARTS  AGE
newegg-nginx-newegg-nginx-56c478c888-6n6rt  0/1    ContainerCreating  0         1m
newegg-nginx-newegg-nginx-56c478c888-hsbcp  0/1    ContainerCreating  0         1m
newegg-nginx-newegg-nginx-56c478c888-vdfgb  0/1    ContainerCreating  0         1m

==&gt; v1/Service
NAME                       TYPE      CLUSTER-IP     EXTERNAL-IP  PORT(S)       AGE
newegg-nginx-newegg-nginx  NodePort  10.97.200.124  &lt;none&gt;       80:32239/TCP  1m

==&gt; v1beta1/Deployment
NAME                       DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
newegg-nginx-newegg-nginx  3        3        3           0          1m


[Pipeline] echo
Application newegg-nginx successfully deployed. Use helm status newegg-nginx to check
[Pipeline] echo
[INFO] Deployment Finished...
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timeout
[Pipeline] End of Pipeline
Finished: SUCCESS</pre>
<p>
	
</p>
<p>
	<span style="color:#337FE5;">7. Verify the result</span>
</p>
<pre id="out" class="console-output">
<p>
	<a href="http://www.showerlee.com/archives/2661/web"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/09/web.png" alt="" width="500" height="93" class="alignnone size-full wp-image-2692" title="" align="" /></a> 
</p>
</pre>
<p>
	With that, we have successfully used Jenkins together with k8s and Helm to deploy our static website to the Kubernetes cluster.
</p>
<p>
	
</p>
<p>
	All done...
</p>
<p>
	
</p>
<p>
	</p>
<div>Statement: this article is licensed under <a rel="external" href="http://creativecommons.org/licenses/by-nc-sa/3.0/deed.zh" title="Attribution-NonCommercial-ShareAlike 3.0 Unported">CC BY-NC-SA 3.0</a></div><div>Please credit the source when republishing: <a rel="external" title="DevOps技术分享" href="http://www.showerlee.com/archives/2661">DevOps技术分享</a></div><div>Article link: <a rel="external" title="Jenkins-Pipeline-CI-CD-with-Helm-on-Kubernetes自动化流水线" href="http://www.showerlee.com/archives/2661">http://www.showerlee.com/archives/2661</a></div>]]></content:encoded>
			<wfw:commentRss>http://www.showerlee.com/archives/2661/feed</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Kubernetes: Helm Package Management</title>
		<link>http://www.showerlee.com/archives/2455</link>
		<comments>http://www.showerlee.com/archives/2455#comments</comments>
		<pubDate>Sat, 14 Apr 2018 05:01:17 +0000</pubDate>
		<dc:creator>showerlee</dc:creator>
				<category><![CDATA[DevTools]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[helm]]></category>
		<category><![CDATA[k8s]]></category>

		<guid isPermaLink="false">http://www.showerlee.com/?p=2455</guid>
		<description><![CDATA[I recently looked into Helm, which is quite popular with Kubernetes. As a package management tool, it bundles Kuber [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>
	<a href="http://www.showerlee.com/archives/2455/kubernetes-helm"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/04/Kubernetes-Helm.png" alt="Kubernetes-Helm" width="800" height="480" class="alignnone size-full wp-image-2466" /></a>
</p>
<p>
	
</p>
<p>
	I recently looked into Helm, which is quite popular in the Kubernetes world. As a package management tool, Helm bundles Kubernetes resources (deployments, services, ingresses, and so on) into a chart, which can then be saved to a chart repository for storage and sharing. Helm versions the released application configuration and makes releases configurable, ultimately simplifying version control, packaging, releasing, deleting, and upgrading applications on Kubernetes.
</p>
<p>
	In fact, Helm resembles our Ansible playbooks in one respect: it supports predefined variables, so the repeated settings in each kube manifest can be replaced with variables. That makes it convenient to manage a project release and to deploy, upgrade, and roll back in bulk.
</p>
<p>
	
</p>
<p>
	Let's roll out...
</p>
<p>
	
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;font-size:16px;"><span style="font-family:Helvetica;background-color:#FFFFFF;"><strong>Environment</strong></span></span>
</p>
<p>
	Local Desktop: MacOS
</p>
<p>
	Virtual Machine: Virtual Box
</p>
<p>
	Virtual System: CentOS 7.4
</p>
<p>
	Kubernetes: Kubernetes1.9
</p>
<p>
	Docker:&nbsp;17.03.2-ce
</p>
<p>
	Helm: helm-v2.7.0
</p>
<p>
	kube-master 10.110.16.10
</p>
<p>
	<span>kube-node-1 10.110.16.11</span>
</p>
<p>
	<span><br />
</span>
</p>
<p>
	<span> </span>
</p>
<p>
	<span style="color:#337FE5;font-family:Helvetica;font-size:16px;background-color:#FFFFFF;"><strong>I. System environment setup</strong></span>
</p>
<p>
	<span style="color:#337FE5;">1. </span><span style="color:#337FE5;">Disable SELinux and the firewall</span>
</p>
<p>
	# vi /etc/sysconfig/selinux
</p>
<pre class="prettyprint lang-bsh">...
SELINUX=disabled 
...</pre>
<p><span># setenforce 0</span></p>
<p>
	# systemctl stop firewalld &amp;&amp; systemctl disable firewalld
</p>
<p>
	
</p>
<p>
	<span> </span>
</p>
<p>
	<span style="color:#337FE5;">2</span><span style="color:#337FE5;">. Set up the <a href="http://www.showerlee.com/archives/tag/k8s" title="查看k8s中的全部文章" class="tag_link">k8s</a> environment.</span>
</p>
<p><a href="http://www.showerlee.com/archives/2200" rel="nofollow">http://www.showerlee.com/archives/2200</a></p>
<p>
	
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;font-size:16px;"><strong>II. Helm setup</strong></span>
</p>
<p><span style="color:#337FE5;"></span><span style="color:#337FE5;">1. Install Helm</span> </p>
<p>
	<span># wget https://storage.googleapis.com/kubernetes-helm/helm-v2.7.0-linux-amd64.tar.gz</span>
</p>
<p>
	<span># tar -zxvf helm-v2.7.0-linux-amd64.tar.gz</span>
</p>
<p>
	<span># mv linux-amd64/helm /usr/local/bin/</span>
</p>
<p>
	<span><br />
</span>
</p>
<p>
	<span style="color:#337FE5;">2. Add a tiller service account to <a href="http://www.showerlee.com/archives/tag/k8s" title="查看k8s中的全部文章" class="tag_link">k8s</a></span>
</p>
<p>
	<span># kubectl create serviceaccount --namespace kube-system tiller</span>
</p>
<p>
	<span># kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller</span>
</p>
<p>
	<span># kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'</span>
</p>
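<p>
	For reference, the first two kubectl commands above are equivalent to applying the following declarative manifest (a sketch; the deployment patch in the third command remains a separate step):
</p>

```yaml
# ServiceAccount + ClusterRoleBinding equivalent to the
# `kubectl create serviceaccount` / `kubectl create clusterrolebinding` commands
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-cluster-rule
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```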
<p>
	<span><br />
</span>
</p>
<p>
	<span><span style="color:#337FE5;">3.</span><span style="color:#337FE5;"> Initialize Helm with the Aliyun tiller image and the tiller service account</span><span style="color:#337FE5;">, deploying tiller as a <a href="http://www.showerlee.com/archives/tag/k8s" title="查看k8s中的全部文章" class="tag_link">k8s</a> Deployment.</span></span>
</p>
<p>
	<span><span style="color:#337FE5;"><span style="color:#000000;"># vi ~/.helm/repository/repositories.yaml</span><br />
</span></span>
</p>
<p>
	<span><span style="color:#337FE5;"><span style="color:#000000;"><span style="color:#E53333;">Tip: username and password here are your Aliyun account credentials</span><br />
</span></span></span>
</p>
<p>
	<span><span style="color:#337FE5;"><span style="color:#000000;"> </span></span></span>
</p>
<pre class="prettyprint">apiVersion: v1
generated: 2018-04-13T23:48:19.490774427-04:00
repositories:
- caFile: ""
  cache: /root/.helm/repository/cache/stable-index.yaml
  certFile: ""
  keyFile: ""
  name: stable
  password: "password"
  url: <a href="https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts" rel="nofollow">https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts</a>
  username: "username"
- caFile: ""
  cache: /root/.helm/repository/cache/local-index.yaml
  certFile: ""
  keyFile: ""
  name: local
  password: ""
  url: <a href="http://127.0.0.1:8879/charts" rel="nofollow">http://127.0.0.1:8879/charts</a>
  username: ""</pre>
<p>
	<span>#&nbsp; helm init --service-account tiller --upgrade --tiller-image=registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.7.0</span>
</p>
<p>
	<span><span style="color:#E53333;">Tip: helm can be thought of as a client for the tiller service; tiller, running as a deployment inside <a href="http://www.showerlee.com/archives/tag/k8s" title="查看k8s中的全部文章" class="tag_link">k8s</a>, is responsible for rendering our chart and handing the result to <a href="http://www.showerlee.com/archives/tag/k8s" title="查看k8s中的全部文章" class="tag_link">k8s</a> for the actual deployment.</span><br />
</span>
</p>
<p>
	<span><br />
<span style="color:#337FE5;">4. Verify that tiller is deployed to <a href="http://www.showerlee.com/archives/tag/k8s" title="查看k8s中的全部文章" class="tag_link">k8s</a></span></span>
</p>
<p>
	# kubectl get pods --namespace kube-system
</p>
<pre class="prettyprint lang-bsh">NAME                                  READY     STATUS    RESTARTS   AGE
etcd-kube-master                      1/1       Running   0          26d
kube-apiserver-kube-master            1/1       Running   0          26d
kube-controller-manager-kube-master   1/1       Running   1          26d
kube-dns-6f4fd4bdf-54smn              3/3       Running   0          26d
kube-flannel-ds-gwl2z                 1/1       Running   0          26d
kube-flannel-ds-m754s                 1/1       Running   0          26d
kube-proxy-697qx                      1/1       Running   0          26d
kube-proxy-cvfd9                      1/1       Running   0          26d
kube-scheduler-kube-master            1/1       Running   1          26d
tiller-deploy-cf797bfbf-rnk4k         1/1       Running   0          1h</pre>
<p>
	
</p>
<p>
	<span> <span style="color:#337FE5;">5. Create a sample chart</span></span>
</p>
<p>
	<span># helm create helm-chart</span>
</p>
<p>
	<span># tree ./helm-chart<br />
</span>
</p>
<pre class="prettyprint lang-bsh">./helm-chart
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── ingress.yaml
│   ├── NOTES.txt
│   └── service.yaml
└── values.yaml</pre>
<p>
	<span style="color:#E53333;">Tip: helm generated a default chart skeleton. Most files under templates/ are k8s deployment manifests, while values.yaml holds the parameters those templates consume and Chart.yaml holds the chart metadata. You can replace the default manifests with your own k8s scripts as needed.</span>
</p>
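<p>
	As a concrete illustration of how values.yaml feeds the templates (the keys shown are representative of a chart generated by helm create; exact defaults vary by Helm version):
</p>

```yaml
# values.yaml (excerpt) - illustrative; the exact keys depend on the
# Helm version that generated the chart
replicaCount: 1
image:
  repository: nginx
  tag: stable
service:
  type: ClusterIP   # later overridden at install time via --set service.type=NodePort

# templates/deployment.yaml consumes these values through Go templating, e.g.:
#   replicas: {{ .Values.replicaCount }}
#   image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```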
<p>
	<span><br />
<span style="color:#337FE5;">6. Lint the chart</span></span>
</p>
<p>
	<span># helm lint ./helm-chart</span>
</p>
<p>
	<span><br />
<span style="color:#337FE5;">7. Deploy the default chart to k8s</span></span>
</p>
<p>
	<span># helm install --name example1 ./helm-chart --set service.type=NodePort</span>
</p>
<p>
	<span><span style="color:#E53333;">Tip: --name names this chart release, and --set service.type=NodePort maps a port on every node's IP to the deployed pod so it can be reached from outside.</span></span>
</p>
<p>
	<span> </span>
</p>
<pre class="prettyprint lang-bsh"># helm install --name example1 ./helm-chart --set service.type=NodePort
NAME:   example1
LAST DEPLOYED: Sat Apr 14 01:08:16 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==&gt; v1/Service
NAME                 TYPE      CLUSTER-IP     EXTERNAL-IP  PORT(S)       AGE
example1-helm-chart  NodePort  10.105.111.66  &lt;none&gt;       80:25146/TCP  0s

==&gt; v1beta1/Deployment
NAME                 DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
example1-helm-chart  1        1        1           0          0s

==&gt; v1/Pod(related)
NAME                                  READY  STATUS             RESTARTS  AGE
example1-helm-chart-7975cbf9b7-86vx5  0/1    ContainerCreating  0         0s


NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services example1-helm-chart)
  export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
  echo <a href="http://$NODE_IP:$NODE_PORT" rel="nofollow">http://$NODE_IP:$NODE_PORT</a></pre>
<p>We can follow the NOTES above to reach the deployed site</p>
<p>
	
</p>
<p>
	<span># curl 10.110.16.10:25146</span>
</p>
<p>
	<span> </span>
</p>
<pre class="prettyprint lang-html">&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;
&lt;title&gt;Welcome to nginx!&lt;/title&gt;
&lt;style&gt;
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
&lt;/style&gt;
&lt;/head&gt;
&lt;body&gt;
&lt;h1&gt;Welcome to nginx!&lt;/h1&gt;
&lt;p&gt;If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.&lt;/p&gt;

&lt;p&gt;For online documentation and support please refer to
&lt;a href="http://nginx.org/"&gt;nginx.org&lt;/a&gt;.&lt;br/&gt;
Commercial support is available at
&lt;a href="http://nginx.com/"&gt;nginx.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Thank you for using nginx.&lt;/em&gt;&lt;/p&gt;
&lt;/body&gt;
&lt;/html&gt;</pre>
<p>
	
</p>
<p>
	<span> </span>
</p>
<p>
	<span style="color:#337FE5;">8. List the current releases</span>
</p>
<p>
	# helm ls
</p>
<pre class="prettyprint lang-bsh">NAME    	REVISION	UPDATED                 	STATUS  	CHART           	NAMESPACE
example1	1       	Sat Apr 14 01:08:16 2018	DEPLOYED	helm-chart-0.1.0	default</pre>
<p># kubectl get deployment</p>
<pre class="prettyprint lang-bsh">NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
example1-helm-chart   1         1         1            1           4m</pre>
<p>
	
</p>
<p>
	<span> <span style="color:#337FE5;">9. Package the chart</span></span>
</p>
<p>
	<span># helm package ./helm-chart --debug</span>
</p>
<p>
	<span><br />
<span style="color:#337FE5;">10. Install a release from the package</span></span>
</p>
<p>
	<span># helm install --name example2 helm-chart-0.1.0.tgz --set service.type=NodePort</span>
</p>
<p>
	
</p>
<p>
	<span> <span style="color:#337FE5;">11. Upgrade the release</span></span>
</p>
<p>
	<span># helm upgrade example2 ./helm-chart</span>
</p>
<p>
	<span><br />
<span style="color:#337FE5;">12. Roll back the release</span></span>
</p>
<p>
	# helm rollback example2 1
</p>
<p>
	<span><br />
<span style="color:#337FE5;">13. Delete the release</span></span>
</p>
<p>
	<span># helm delete example2</span>
</p>
<p>
	<span>#&nbsp;helm del --purge example2</span>
</p>
<p>
	<span><br />
<span style="color:#337FE5;">14. View the deletion history of releases</span></span>
</p>
<p>
	<span><span style="color:#337FE5;"><span style="color:#E53333;">Tip: deletion records are only available if the release was deleted without the --purge flag</span></span></span>
</p>
<p>
	<span># helm ls --deleted -d</span>
</p>
<p>
	<span> </span>
</p>
<pre class="prettyprint lang-bsh">NAME    	REVISION	UPDATED                 	STATUS 	CHART           	NAMESPACE
example2	2       	Sat Apr 14 00:14:54 2018	DELETED	helm-chart-0.1.0	default</pre>
<p>
	
</p>
<p>
	I won't go further into Helm chart syntax here; if you need it, the official Helm documentation covers it:
</p>
<p><a href="https://docs.helm.sh" rel="nofollow">https://docs.helm.sh</a></p>
<p>
	
</p>
<p>
	Finished...</p>
<div>Statement: this article is licensed under <a rel="external" href="http://creativecommons.org/licenses/by-nc-sa/3.0/deed.zh" title="Attribution-NonCommercial-ShareAlike 3.0 Unported">CC BY-NC-SA 3.0</a></div><div>Please credit the source when republishing: <a rel="external" title="DevOps技术分享" href="http://www.showerlee.com/archives/2455">DevOps技术分享</a></div><div>Article link: <a rel="external" title="Kubernetes之Helm包管理" href="http://www.showerlee.com/archives/2455">http://www.showerlee.com/archives/2455</a></div>]]></content:encoded>
			<wfw:commentRss>http://www.showerlee.com/archives/2455/feed</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Deploying WordPress + MySQL on Kubernetes</title>
		<link>http://www.showerlee.com/archives/2336</link>
		<comments>http://www.showerlee.com/archives/2336#comments</comments>
		<pubDate>Sat, 24 Feb 2018 09:51:38 +0000</pubDate>
		<dc:creator>showerlee</dc:creator>
				<category><![CDATA[DevTools]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[k8s]]></category>

		<guid isPermaLink="false">http://www.showerlee.com/?p=2336</guid>
		<description><![CDATA[In this part we build on the earlier k8s topics to show how to deploy wordpress+MySQL on kubernetes, and [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>
	In this part we build on the earlier <a href="http://www.showerlee.com/archives/tag/k8s" title="查看k8s中的全部文章" class="tag_link">k8s</a> topics to show how to deploy wordpress+MySQL on kubernetes, using NFS to persist the containers' source code and DB data.
</p>
<p>
	
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	<span style="font-size:16px;vertical-align:baseline;color:#337FE5;"><span style="vertical-align:baseline;"><strong>Environment</strong></span></span>
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	System: CentOS 7.4
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	Kubernetes: Kubernetes1.9
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	Docker:&nbsp;17.03.2-ce
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	kube-master 10.110.16.10
</p>
<p style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;">
	kube-node-1 10.110.16.11
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;font-size:18px;">I. NFS setup:</span>
</p>
<p>
	<span style="color:#337FE5;">1. Install the NFS packages</span>
</p>
<p>
	<span style="color:#000000;">Install the NFS components on both the master and the nodes</span>
</p>
<p>
	#&nbsp;<span style="color:#111111;font-family:Helvetica;font-size:13px;background-color:#FFFFFF;">yum install nfs-utils -y</span>
</p>
<p>
	<span style="color:#111111;font-family:Helvetica;font-size:13px;background-color:#FFFFFF;"><span style="color:#E53333;">Tip: make sure nfs-utils is installed on every master and node, otherwise containers will fail when mounting the NFS share.</span></span>
</p>
<p>
	<span style="color:#337FE5;">2. Configure NFS shares on the master for the mysql data and the wordpress source</span>
</p>
<p>
	<span style="color:#337FE5;"><span style="color:#000000;"># systemctl enable nfs-server &amp;&amp;&nbsp;</span><span style="color:#000000;">systemctl start nfs-server</span><span style="color:#000000;"></span><br />
</span>
</p>
<p>
	# mkdir -p /kube/mysql-db
</p>
<p>
	<span># mkdir -p /kube/wordpress</span>
</p>
<p>
	# chown nfsnobody:nfsnobody /kube/mysql-db
</p>
<p>
	<span># chown nfsnobody:nfsnobody /kube/wordpress</span>
</p>
<p>
	# chmod 755 /kube/mysql-db
</p>
<p>
	<span># chmod 755 /kube/wordpress</span>
</p>
<p>
	# echo -e "/kube/mysql-db    kube-*(rw,sync,no_subtree_check,no_root_squash)" &gt; /etc/exports
</p>
<p>
	<span># echo -e "/kube/wordpress    kube-*(rw,sync,no_subtree_check,no_root_squash)" &gt;&gt; /etc/exports</span>
</p>
<p>
	<span style="color:#E53333;">Tip: the kube-* pattern restricts the master's NFS shares to hosts whose names start with kube-, and no_root_squash gives the wordpress-mysql pod write permission on the mounted /var/lib/mysql directory while it initializes the mysql configuration.</span>
</p>
<p>
	<span style="color:#337FE5;">3. Apply the configuration</span>
</p>
<p>
	# exportfs -a
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;font-size:18px;">II. </span><span style="color:#337FE5;font-size:18px;">Persistent volume setup</span>
</p>
<p><span style="color:#337FE5;">1. </span><span style="color:#337FE5;">Create Persistent Volumes for the mysql data and the wordpress source</span><br />
# kubectl create -f mysql-pv.yaml</p>
<pre class="prettyprint lang-bsh">apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  labels:
    app: mysql
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /kube/mysql-db
    server: kube-master</pre>
<p>
	<span style="color:#337FE5;"><span style="color:#000000;"># kubectl create -f wordpress-pv.yaml</span></span>
</p>
<p>
	<span style="color:#337FE5;"><span style="color:#000000;"> </span></span>
</p>
<pre class="prettyprint lang-bsh">apiVersion: v1
kind: PersistentVolume
metadata:
  name: wp-pv
  labels:
    app: wordpress
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /kube/wordpress
    server: kube-master</pre>
<p>
	<span style="color:#337FE5;">2. Create the PVC for the mysql data</span>
</p>
<p># kubectl create -f mysql-pvc.yaml</p>
<pre class="prettyprint lang-bsh">kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-pv-claim
  labels:
    app: mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi</pre>
<p>
	<span style="color:#337FE5;">3. Create the PVC for the wordpress source</span><br />
<span># kubectl create -f wordpress-pvc.yaml</span>
</p>
<pre class="prettyprint lang-bsh">kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi</pre>
<p>
	<span style="color:#337FE5;"><span style="color:#000000;">Check the binding</span></span>
</p>
<p>
	<span style="color:#337FE5;"><span style="color:#000000;"># kubectl get pvc<br />
</span></span>
</p>
<pre class="prettyprint">NAME             STATUS    VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim   Bound     mysql-pv   5Gi        RWO                           3m
wp-pv-claim      Bound     wp-pv      5Gi        RWO                           6s</pre>
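<p>
	Note that a PVC binds to any PV that satisfies its requested capacity and access modes; the app labels on the PVs above are not used for matching unless the claim asks for them. To pin a claim to specific volumes, a label selector can be added (an optional sketch, not required for this setup):
</p>

```yaml
# Optional: restrict mysql-pv-claim to volumes labeled app=mysql
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  selector:
    matchLabels:
      app: mysql
  resources:
    requests:
      storage: 2Gi
```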
<p>
	<span style="color:#337FE5;font-size:18px;">III. Secret setup</span>
</p>
<p>
	<span style="color:#337FE5;font-size:12px;">1. Create the mysql root password</span>
</p>
<p># kubectl create secret generic mysql-pass --from-literal='password=countonme'</p>
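<p>
	Keep in mind that a Secret stores its value base64-encoded, not encrypted. A quick local sanity check of the encoding (plain coreutils, no cluster needed; the literal matches the password above):
</p>

```shell
# Secrets hold base64-encoded values; encode/decode with coreutils base64
encoded=$(printf '%s' 'countonme' | base64)
echo "$encoded"                      # prints Y291bnRvbm1l
printf '%s' "$encoded" | base64 -d   # prints countonme
```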
<p>
	
</p>
<p>
	<span style="font-size:18px;color:#337FE5;">IV. Deployment setup</span>
</p>
<p><span style="color:#337FE5;">1. Deploy the mysql deployment with a PVC</span><br />
# kubectl create -f mysql-deployment.yaml</p>
<pre class="prettyprint lang-bsh">apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim</pre>
<p>
	<span style="color:#337FE5;">2. Deploy the wordpress deployment with a PVC</span>
</p>
<p>
	<span># kubectl create -f wordpress-deployment.yaml</span>
</p>
<pre class="prettyprint">apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim</pre>
<p>
	<span style="color:#337FE5;">3. Service setup</span>
</p>
<p>
	<span style="color:#E53333;">Tip: here we expose port 80 on the node IPs, so from the host machine we can browse to any node's address and reach the WordPress site. Note that 80 lies outside the default NodePort range (30000-32767), so this only works if the apiserver's --service-node-port-range was extended to include it.</span>
</p>
<p>
	# kubectl create -f wp-svc.yaml
</p>
<pre class="prettyprint">apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
      nodePort: 80
  selector:
    app: wordpress
    tier: frontend
  type: NodePort
</pre>
<p>
	
</p>
<p>
	<span style="color:#E53333;">Tip: the <strong>name: wordpress-mysql</strong> defined in this service guarantees that the environment variable below, set in wordpress-deployment.yaml, works as a valid hostname for reaching our mysql container, keeping the web server and the database server in touch.</span>
</p>
<pre class="prettyprint">env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql</pre>
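<p>
	This works because the Service name is registered in the cluster DNS. Assuming the default namespace and the default cluster.local DNS domain, the same host can also be written fully qualified:
</p>

```yaml
# Equivalent forms resolvable from pods in the same cluster
# (namespace "default" and domain "cluster.local" are the usual defaults):
#   wordpress-mysql
#   wordpress-mysql.default
#   wordpress-mysql.default.svc.cluster.local
env:
- name: WORDPRESS_DB_HOST
  value: wordpress-mysql.default.svc.cluster.local
```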
<p>
	
</p>
<p>
	<span style="color:#337FE5;font-size:18px;">V. Verify the result</span>
</p>
<p>
	<span style="color:#337FE5;">1. Visit the wordpress home page</span>
</p>
<p>
	Here we can simply point a browser at any node's IP address to reach the wordpress home page
</p>
<p>
	<a href="http://www.showerlee.com/archives/2336/wordpress01"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/02/wordpress01.png" alt="wordpress01" width="845" height="768" class="alignnone size-full wp-image-2394" /></a>
</p>
<p>
	
</p>
<p>
	Fill in the relevant details and run the initial WordPress installation
</p>
<p>
	<a href="http://www.showerlee.com/archives/2336/wordpress02"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/02/wordpress02.png" alt="wordpress02" width="845" height="768" class="alignnone size-full wp-image-2395" /></a>
</p>
<p>
	Installation complete
</p>
<p>
	<a href="http://www.showerlee.com/archives/2336/wordpress03-2"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/02/wordpress031.png" alt="wordpress03" width="1024" height="602" class="alignnone size-full wp-image-2399" /></a>
</p>
<p>
	The admin dashboard
</p>
<p>
	
</p>
<p>
	<a href="http://www.showerlee.com/archives/2336/wordpress04"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/02/wordpress04.png" alt="wordpress04" width="1280" height="731" class="alignnone size-full wp-image-2400" /></a>
</p>
<p>
	The home page:
</p>
<p>
	<a href="http://www.showerlee.com/archives/2336/wordpress05"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/02/wordpress05.png" alt="wordpress05" width="1024" height="584" class="alignnone size-large wp-image-2401" /></a>
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;">2. On the NFS host, check the directories mounted into the containers for the MySQL data and the WordPress root dir.</span>
</p>
<p>
	<span style="color:#337FE5;"><a href="http://www.showerlee.com/archives/2336/wordpress07"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/02/wordpress07.png" alt="wordpress07" width="1024" height="249" class="alignnone size-full wp-image-2413" /></a><br />
</span>
</p>
<p>
	<a href="http://www.showerlee.com/archives/2336/wordpress06"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/02/wordpress06.png" alt="wordpress06" width="972" height="768" class="alignnone size-full wp-image-2410" /></a>
</p>
<p>
	
</p>
<p>
	
</p>
<p>
	If you are interested, you can download the code for this article from the repository below:
</p>
<p><a href="https://git.showerlee.com/showerlee/kube-deploy/tree/master/wordpress-mysql" rel="nofollow">https://git.showerlee.com/showerlee/kube-deploy/tree/master/wordpress-mysql</a></p>
<p>
	
</p>
<p>
	Finished...
</p>
<p>
	
</p>
<p>
	Postscript:
</p>
<p>
	Using the Helm package manager to deploy WordPress greatly simplifies the work.
</p>
<p>
	My code repository provides the WordPress chart deployment scripts; the detailed deployment steps are below:
</p>
<p>
	<span style="color:#337FE5;font-size:12px;">Prerequisite:</span>
</p>
<p>
	Kubernetes cluster setup
</p>
<p><a href="http://www.showerlee.com/archives/2200" rel="nofollow">http://www.showerlee.com/archives/2200</a></p>
<p>
	Helm setup
</p>
<p><a href="http://www.showerlee.com/archives/2455" rel="nofollow">http://www.showerlee.com/archives/2455</a></p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;">Helm deployment:</span>
</p>
<p>
	# git clone git@git.showerlee.com:showerlee/kube-deploy.git
</p>
<p>
	# cd kube-deploy
</p>
<p>
	# kubectl create secret generic mysql-pass --from-literal='password=countonme'
</p>
<p>
	# helm install --name wordpress-mysql ./wordpress-helm-chart --set service.type=NodePort
</p>
<pre class="prettyprint lang-bsh">NAME:   wordpress-mysql
LAST DEPLOYED: Sat Apr 14 03:09:46 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==&gt; v1/PersistentVolume
NAME      CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS  CLAIM                   STORAGECLASS  REASON  AGE
mysql-pv  5Gi       RWO           Recycle         Bound   default/mysql-pv-claim  1s
wp-pv     5Gi       RWO           Recycle         Bound   default/wp-pv-claim     1s

==&gt; v1/PersistentVolumeClaim
NAME            STATUS  VOLUME    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
mysql-pv-claim  Bound   mysql-pv  5Gi       RWO           1s
wp-pv-claim     Bound   wp-pv     5Gi       RWO           1s

==&gt; v1/Service
NAME             TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)    AGE
wordpress-mysql  ClusterIP  None           &lt;none&gt;       3306/TCP   1s
wordpress        NodePort   10.110.14.233  &lt;none&gt;       80:80/TCP  1s

==&gt; v1/Deployment
NAME             AGE
wordpress-mysql  1s
wordpress        1s


NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services wordpress-mysql-wordpress-helm-chart)
  export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
  echo <a href="http://$NODE_IP:$NODE_PORT" rel="nofollow">http://$NODE_IP:$NODE_PORT</a></pre>
<p>
	
</p>
<p>
	</p>
<div>Notice: this article is licensed under the <a rel="external" href="http://creativecommons.org/licenses/by-nc-sa/3.0/deed.zh" title="署名-非商业性使用-相同方式共享 3.0 Unported">CC BY-NC-SA 3.0</a> license</div><div>Please credit the source when republishing: <a rel="external" title="DevOps技术分享" href="http://www.showerlee.com/archives/2336">DevOps技术分享</a></div><div>Permalink: <a rel="external" title="Kubernetes部署WordPress+MySQL" href="http://www.showerlee.com/archives/2336">http://www.showerlee.com/archives/2336</a></div>]]></content:encoded>
			<wfw:commentRss>http://www.showerlee.com/archives/2336/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Kubernetes: Secrets and ConfigMaps</title>
		<link>http://www.showerlee.com/archives/2308</link>
		<comments>http://www.showerlee.com/archives/2308#comments</comments>
		<pubDate>Sat, 24 Feb 2018 04:11:00 +0000</pubDate>
		<dc:creator>showerlee</dc:creator>
				<category><![CDATA[DevTools]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[k8s]]></category>

		<guid isPermaLink="false">http://www.showerlee.com/?p=2308</guid>
		<description><![CDATA[Secrets Secrets是一个包含敏感数据的对象，例如我们常用的密码，令牌或密钥等,&#160; 我们编 [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>
	<span style="font-size:24px;color:#337FE5;"><strong>Secrets</strong></span>
</p>
<p>
	A Secret is an object that holds sensitive data, such as the passwords, tokens, or keys we use every day. If we write this information in plain text directly in our YAML, the sensitive values are exposed in the script; placing them in a Secret object gives better control over how they are used and lowers the risk of accidental exposure.
</p>
<p>
	A Pod can load the key-value pairs of a pre-created Secret into its environment variables; by reading those key-value pairs the Pod's environment configuration can be updated dynamically.
</p>
<p>
	<span style="color:#337FE5;">1. Create a Secret</span>
</p>
<p>
	# kubectl create secret generic secret-demo --from-literal='password=countonme'
</p>
<p>
	<span style="color:#337FE5;">2. View the newly created Secret</span>
</p>
<p>
	# kubectl get secret secret-demo
</p>
<pre class="prettyprint lang-bsh">NAME          TYPE      DATA      AGE
secret-demo   Opaque    1         13s</pre>
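<p>
	Note that Secret data is only base64-encoded, not encrypted: kubectl get secret secret-demo -o yaml would show the value under data.password in base64 form. A quick round-trip with the standard base64 tool (no cluster needed):
</p>

```shell
# Encode the literal used above, then decode it back.
encoded=$(printf '%s' 'countonme' | base64)
echo "$encoded"                      # Y291bnRvbm1l
printf '%s' "$encoded" | base64 -d   # countonme
```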
<p>
	<span style="color:#337FE5;">3. Create a Pod that references the Secret</span>
</p>
<p>
	# vi secret-env-pod.yaml
</p>
<pre class="prettyprint">apiVersion: v1
kind: Pod
metadata:
  name: httpd-pod
spec:
  containers:
  - image: httpd
    name: httpd
    imagePullPolicy: Always
    env:
    - name: PASSWORD
      valueFrom:
        secretKeyRef:
          name: secret-demo
          key: password</pre>
<p>
	# kubectl create -f secret-env-pod.yaml
</p>
<p>
	<span style="color:#337FE5;">4. View the Secret</span>
</p>
<p>
	<span># kubectl describe secret</span>
</p>
<p>
	<span style="color:#337FE5;">5. Check whether the variable has been injected into the Pod</span>
</p>
<p>
	# kubectl exec -ti httpd-pod env
</p>
<pre class="prettyprint lang-bsh">PATH=/usr/local/apache2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=httpd-pod
TERM=xterm
PASSWORD=countonme
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
HTTPD_PREFIX=/usr/local/apache2
NGHTTP2_VERSION=1.18.1-1
OPENSSL_VERSION=1.0.2l-1~bpo8+1
HTTPD_VERSION=2.4.29
HTTPD_SHA256=777753a5a25568a2a27428b2214980564bc1c38c1abf9ccc7630b639991f7f00
HTTPD_PATCHES=
APACHE_DIST_URLS=https://www.apache.org/dyn/closer.cgi?action=download&amp;filename=
HOME=/root</pre>
<p>The Pod's environment variables now include the key-value pair PASSWORD=countonme.</p>
<p>
	<span style="color:#337FE5;">6. Mount the Secret into a directory inside the Pod as files.</span>
</p>
<p>
	# vi secret-vol-pod.yaml
</p>
<pre class="prettyprint lang-bsh">apiVersion: v1
kind: Pod
metadata:
  name: httpd-pod-secret-vol
spec:
  containers:
  - image: httpd
    name: httpd
    imagePullPolicy: Always
    volumeMounts:
    - name: secret
      mountPath: "/mnt"
      readOnly: true
  volumes:
  - name: secret
    secret:
      secretName: secret-demo</pre>
<p># kubectl create -f secret-vol-pod.yaml</p>
<p>
	# kubectl exec -it httpd-pod-secret-vol cat /mnt/password
</p>
<pre class="prettyprint lang-bsh">countonme</pre>
<p>The Pod now contains a text file named password whose content is countonme.</p>
<p>
	
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;font-size:24px;"><strong>Config Map</strong></span>
</p>
<p>
	<span style="color:#337FE5;">1. Create a ConfigMap</span>
</p>
<p>
	# vi cfgmap-demo.yaml
</p>
<pre class="prettyprint">apiVersion: v1
data:
  database: db.example.com
  db_port: "3306"
  http_url: <a href="http://www.example.com" rel="nofollow">http://www.example.com</a>
kind: ConfigMap
metadata:
  name: cfgmap-demo</pre>
<p>
	# kubectl create -f cfgmap-demo.yaml
</p>
<p>
	<br />
<span style="color:#337FE5;">2. View the ConfigMap</span>
</p>
<p>
	# kubectl get configmap cfgmap-demo -o yaml
</p>
<pre class="prettyprint lang-bsh">apiVersion: v1
data:
  database: db.example.com
  db_port: "3306"
  http_url: <a href="http://www.example.com" rel="nofollow">http://www.example.com</a>
kind: ConfigMap
metadata:
  creationTimestamp: 2018-02-24T07:11:01Z
  name: cfgmap-demo
  namespace: default
  resourceVersion: "1064654"
  selfLink: /api/v1/namespaces/default/configmaps/cfgmap-demo
  uid: de9248d1-1931-11e8-9e24-00163e0e24bf</pre>
<p>
	<span style="color:#337FE5;">3. Modify the ConfigMap</span>
</p>
<p>
	# vi cfgmap-demo.yaml
</p>
<p>
	Add one more key-value pair:
</p>
<pre class="prettyprint">apiVersion: v1
data:
  database: db.example.com
  db_port: "3306"
  http_url: <a href="http://www.example.com" rel="nofollow">http://www.example.com</a>
  http_port: "80"
kind: ConfigMap
metadata:
  name: cfgmap-demo</pre>
<p>
	Update the ConfigMap:
</p>
<p>
	# kubectl replace -f cfgmap-demo.yaml
</p>
<p>
	View the updated ConfigMap:
</p>
<p>
	# kubectl get configmap cfgmap-demo -o yaml
</p>
<pre class="prettyprint">apiVersion: v1
data:
  database: db.example.com
  db_port: "3306"
  http_port: "80"
  http_url: <a href="http://www.example.com" rel="nofollow">http://www.example.com</a>
kind: ConfigMap
metadata:
  creationTimestamp: 2018-02-24T07:11:01Z
  name: cfgmap-demo
  namespace: default
  resourceVersion: "1065520"
  selfLink: /api/v1/namespaces/default/configmaps/cfgmap-demo
  uid: de9248d1-1931-11e8-9e24-00163e0e24bf
</pre>
<p>
	<span style="color:#337FE5;">4. Create a Pod that references the ConfigMap</span>
</p>
<p>
	<span style="color:#000000;">#&nbsp;vi cfgmap-env-pod.yaml</span>
</p>
<p>
	
</p>
<pre class="prettyprint">apiVersion: v1
kind: Pod
metadata:
  name: cfgmap-httpd-pod
spec:
  containers:
  - image: httpd
    name: httpd
    imagePullPolicy: Always
    envFrom:
    - configMapRef:
        name: cfgmap-demo</pre>
<p>
	# kubectl create -f cfgmap-env-pod.yaml
</p>
<p>
	<span style="color:#337FE5;">5. Verify the ConfigMap key-value pairs are injected into the Pod</span>
</p>
<p>
	# kubectl exec -ti cfgmap-httpd-pod env
</p>
<pre class="prettyprint">PATH=/usr/local/apache2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=cfgmap-httpd-pod
TERM=xterm
db_port=3306
http_port=80
http_url=http://www.example.com
database=db.example.com
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
HTTPD_PREFIX=/usr/local/apache2
NGHTTP2_VERSION=1.18.1-1
OPENSSL_VERSION=1.0.2l-1~bpo8+1
HTTPD_VERSION=2.4.29
HTTPD_SHA256=777753a5a25568a2a27428b2214980564bc1c38c1abf9ccc7630b639991f7f00
HTTPD_PATCHES=
APACHE_DIST_URLS=https://www.apache.org/dyn/closer.cgi?action=download&amp;filename=
HOME=/root</pre>
<p>
	All the key-value pairs in our ConfigMap have been successfully injected into the Pod's environment variables.
</p>
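<p>
	Like a Secret, a ConfigMap can also be mounted into a Pod as files rather than environment variables. A hypothetical sketch mirroring the secret-vol-pod.yaml example above (the Pod name httpd-pod-cfgmap-vol is made up for illustration):
</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: httpd-pod-cfgmap-vol   # hypothetical name
spec:
  containers:
  - image: httpd
    name: httpd
    volumeMounts:
    - name: config
      mountPath: "/mnt"        # each key becomes a file, e.g. /mnt/db_port
      readOnly: true
  volumes:
  - name: config
    configMap:
      name: cfgmap-demo
```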
<p>
	Related code:
</p>
<p><a href="https://git.showerlee.com/showerlee/kube-deploy" rel="nofollow">https://git.showerlee.com/showerlee/kube-deploy</a></p>
<p>
	
</p>
<p>
	Finished...</p>
<div>Notice: this article is licensed under the <a rel="external" href="http://creativecommons.org/licenses/by-nc-sa/3.0/deed.zh" title="署名-非商业性使用-相同方式共享 3.0 Unported">CC BY-NC-SA 3.0</a> license</div><div>Please credit the source when republishing: <a rel="external" title="DevOps技术分享" href="http://www.showerlee.com/archives/2308">DevOps技术分享</a></div><div>Permalink: <a rel="external" title="Kubernetes之Secrets与Config Maps" href="http://www.showerlee.com/archives/2308">http://www.showerlee.com/archives/2308</a></div>]]></content:encoded>
			<wfw:commentRss>http://www.showerlee.com/archives/2308/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Kubernetes: Persistent Volumes</title>
		<link>http://www.showerlee.com/archives/2280</link>
		<comments>http://www.showerlee.com/archives/2280#comments</comments>
		<pubDate>Fri, 23 Feb 2018 08:12:49 +0000</pubDate>
		<dc:creator>showerlee</dc:creator>
				<category><![CDATA[DevTools]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[k8s]]></category>

		<guid isPermaLink="false">http://www.showerlee.com/?p=2280</guid>
		<description><![CDATA[Persistent Volume(持久化卷)简称PV, 是一个K8S资源对象，我们可以单独创建一个PV, 它 [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>
	A Persistent Volume (PV) is a K8S resource object that can be created on its own; it has no direct relationship with a Pod. Instead, binding happens dynamically through a Persistent Volume Claim (PVC): we reference a pre-created PVC in the Pod definition, and the PVC automatically binds a suitable PV for the Pod according to the Pod's requirements.
</p>
<p>
	
</p>
<p>
	
</p>
<p>
	PV and PVC concepts:
</p>
<p>
	A PersistentVolume (PV) is a piece of storage provisioned by an administrator; it is part of the cluster. Just as nodes are cluster resources, so are PVs. A PV is a volume plugin like Volume, but it has a lifecycle independent of any Pod that uses it. This API object captures the details of the storage implementation, be it NFS, iSCSI, or a cloud-provider-specific storage system.
</p>
<p>
	A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod: Pods consume node resources, and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); PVC declarations can request a specific size and access modes (for example, mounted read/write once or read-only many times).
</p>
<p>
	
</p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	How does it differ from a plain Volume?
</p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	A plain Volume has a static binding to the Pod that uses it: the file that defines the Pod also defines the Volume it uses. The Volume is an appendage of the Pod; we cannot create a Volume on its own, because it is not an independent K8S resource object.
</p>
<p style="font-family:Helvetica, Arial, &quot;font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	
</p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	A simple way to understand persistent volumes:
</p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	We first create an independent persistent volume (PV) resource object, then create a PVC storage request bound to that PV, which defines resource settings such as accessModes and resources up front. Finally, we mount the defined PVC in the Pod for our data storage.
</p>
<p style="font-family:Helvetica, Arial, &quot;font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	
</p>
<p style="font-family:Helvetica, Arial, &quot;font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	Let's start...
</p>
<p style="font-family:Helvetica, Arial, &quot;font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	
</p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	<span style="font-size:18px;color:#337FE5;"><strong>I. NFS installation and configuration</strong></span>
</p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	Here we use NFS to implement the <a href="http://www.showerlee.com/archives/tag/k8s" title="查看k8s中的全部文章" class="tag_link">k8s</a> persistent volume configuration.
</p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	<span style="color:#337FE5;">1. Install the NFS server</span>
</p>
<p style="font-family:Helvetica, Arial, &quot;font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	# yum install nfs-utils -y
</p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	<span style="color:#337FE5;">2. Start the NFS service</span>
</p>
<p style="font-family:Helvetica, Arial, &quot;font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	# systemctl enable nfs-server
</p>
<p style="font-family:Helvetica, Arial, &quot;font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	# systemctl start nfs-server
</p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	<span style="color:#337FE5;">3. Configure the NFS shared directory</span>
</p>
<p style="font-family:Helvetica, Arial, &quot;font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	# mkdir /srv/pv-demo
</p>
<p style="font-family:Helvetica, Arial, &quot;font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	# chown nfsnobody:nfsnobody /srv/pv-demo
</p>
<p style="font-family:Helvetica, Arial, &quot;font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	# chmod 755 /srv/pv-demo
</p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	# echo "/srv/pv-demo    kube-master(rw,sync)" &gt; /etc/exports
</p>
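<p>
	For reference, the resulting entry follows the exports(5) format of one export per line, with the shared path followed by the allowed client and its options:
</p>

```
# /etc/exports
# <shared-path>    <allowed-client>(<options>)
/srv/pv-demo    kube-master(rw,sync)
```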
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	<span style="color:#337FE5;">4. Apply the export</span>
</p>
<p style="font-family:Helvetica, Arial, &quot;font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	# exportfs -a
</p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	Because resources are limited, we create the NFS shared directory /srv/pv-demo on the Master itself for the persistent volume used below. If you have spare machines, you can set up an independent server on the same network segment as kube-master to act as the NFS server.
</p>
<p style="font-family:Helvetica, Arial, &quot;font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	
</p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	<span style="font-size:18px;color:#337FE5;"><strong>II. Persistent Volume configuration</strong></span>
</p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	<span style="color:#337FE5;">1. Create a Persistent Volume</span>
</p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	# vi pv.yaml
</p>
<pre class="prettyprint">apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /srv/pv-demo
    server: kube-master</pre>
<p>
	<span style="color:#E53333;">Tip: the volume size defined here is 5Gi, the access mode is ReadWriteOnce, the reclaim policy is Recycle, the NFS shared directory is the /srv/pv-demo we created on the master earlier, and the server is our locally defined host kube-master.</span>
</p>
<p>
	# kubectl create -f pv.yaml
</p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	<span style="color:#337FE5;">2. View the PV</span>
</p>
<p style="font-family:Helvetica, Arial, &quot;font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	# kubectl get pv pv-demo
</p>
<pre class="prettyprint">NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM              STORAGECLASS   REASON    AGE
pv-demo   5Gi        RWO            Recycle          Bound     default/pvc-demo                            1h</pre>
<p><span style="color:#337FE5;">3. Create a Persistent Volume Claim</span> </p>
<p>
	# vi pvc.yaml
</p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	<span style="color:#E53333;">Tip: a PVC can be understood as a request for resources from a PV. In other words, all of our data is stored through the PVC, and deleting the PVC will also wipe the data stored there.</span>
</p>
<pre class="prettyprint">kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-demo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi</pre>
<p style="font-family:Helvetica, Arial, &quot;font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	# kubectl create -f pvc.yaml
</p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	<span style="color:#337FE5;">4. View the PVC</span>
</p>
<p style="font-family:Helvetica, Arial, &quot;font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	# kubectl get pvc pvc-demo
</p>
<pre class="prettyprint">NAME       STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-demo   Bound     pv-demo   5Gi        RWO                           1h
</pre>
<p><span style="color:#337FE5;">5. Create a Pod that uses the PVC</span> </p>
<p style="font-family:Helvetica, Arial, &quot;font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	# vi pvpod.yaml
</p>
<pre class="prettyprint">apiVersion: v1
kind: Pod
metadata:
  name: httpd-pod
spec:
  containers:
  - image: httpd
    name: httpd-pod
    imagePullPolicy: Always
    volumeMounts:
    - mountPath: "/usr/local/apache2/htdocs/"
      name: httpd-volume
  volumes:
    - name: httpd-volume
      persistentVolumeClaim:
        claimName: pvc-demo</pre>
<p>
	<span style="color:#E53333;">Tip: the claimName here must match the name of the PVC we created earlier.</span>
</p>
<p>
	# kubectl create -f pvpod.yaml
</p>
<p>
	<span style="color:#E53333;">Tip: here we mount the PVC at the Pod's Apache document root "/usr/local/apache2/htdocs/" for the final test.</span>
</p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	<span style="color:#337FE5;">6. Check whether the Pod has mounted the PVC</span>
</p>
<p style="font-family:Helvetica, Arial, &quot;font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	# kubectl describe pv
</p>
<pre class="prettyprint">Name:            pv-demo
Labels:          &lt;none&gt;
Annotations:     pv.kubernetes.io/bound-by-controller=yes
StorageClass:
Status:          Bound
Claim:           default/pvc-demo
Reclaim Policy:  Recycle
Access Modes:    RWO
Capacity:        5Gi
Message:
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    kube-master
    Path:      /srv/pv-demo
    ReadOnly:  false
Events:        &lt;none&gt;</pre>
<p># kubectl describe pods</p>
<pre class="prettyprint">Name:         httpd-pod
Namespace:    default
Node:         kube-master/172.17.2.153
Start Time:   Fri, 23 Feb 2018 15:38:55 +0800
Labels:       &lt;none&gt;
Annotations:  &lt;none&gt;
Status:       Running
IP:           10.244.0.46
Containers:
  httpd-pod:
    Container ID:   docker://b7e5fd2732864934b732fdbd4bb24b3ccc8949c2e9d8832a36e271f2ee350b2b
    Image:          httpd
    Image ID:       docker-pullable://httpd@sha256:6e61d60e4142ea44e8e69b22f1e739d89e1dc8a2764182d7eecc83a5bb31181e
    Port:           &lt;none&gt;
    State:          Running
      Started:      Fri, 23 Feb 2018 15:38:59 +0800
    Ready:          True
    Restart Count:  0
    Environment:    &lt;none&gt;
    Mounts:
      /usr/local/apache2/htdocs from httpd-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bnkxx (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  httpd-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-demo
    ReadOnly:   false
  default-token-bnkxx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-bnkxx
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  &lt;none&gt;
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                  Message
  ----    ------                 ----  ----                  -------
  Normal  Scheduled              58m   default-scheduler     Successfully assigned httpd-pod to kube-master
  Normal  SuccessfulMountVolume  58m   kubelet, kube-master  MountVolume.SetUp succeeded for volume "default-token-bnkxx"
  Normal  SuccessfulMountVolume  58m   kubelet, kube-master  MountVolume.SetUp succeeded for volume "pv-demo"
  Normal  Pulling                58m   kubelet, kube-master  pulling image "httpd"
  Normal  Pulled                 58m   kubelet, kube-master  Successfully pulled image "httpd"
  Normal  Created                58m   kubelet, kube-master  Created container
  Normal  Started                58m   kubelet, kube-master  Started container</pre>
<p>From this output we can see that the PVC has been successfully mounted at the Apache document root of the Pod we defined.</p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	<span style="color:#337FE5;">7. Write an index.html into the Apache document root inside the Pod</span>
</p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	# kubectl exec -ti httpd-pod -- /bin/sh -c "echo 'This is a persistent volume from httpd-pod' &gt; /usr/local/apache2/htdocs/index.html"
</p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	<span style="color:#337FE5;">8. Confirm the file was written</span>
</p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	# kubectl exec -ti httpd-pod -- cat /usr/local/apache2/htdocs/index.html
</p>
<pre class="prettyprint lang-bsh">This is a persistent volume from httpd-pod</pre>
<p><span style="color:#337FE5;">9. Delete and recreate the Pod to verify whether the data is lost when the Pod is destroyed.</span> </p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	# kubectl delete pod httpd-pod<br />
# kubectl create -f pvpod.yaml<br />
# kubectl exec -ti httpd-pod -- cat /usr/local/apache2/htdocs/index.html
</p>
<pre class="prettyprint lang-bsh">This is a persistent volume from httpd-pod</pre>
<p><span style="color:#E53333;">The index.html we wrote earlier is still stored in the PVC, proving the data is not lost when the Pod is destroyed.</span> </p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	<span style="color:#337FE5;">10. Get the Pod's internal IP</span>
</p>
<p style="font-family:Helvetica, Arial, &quot;font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	# kubectl get pods -o wide
</p>
<pre class="prettyprint lang-bsh">NAME        READY     STATUS    RESTARTS   AGE       IP            NODE
httpd-pod   1/1       Running   0          1m        10.244.0.46   kube-master</pre>
<p><span style="color:#337FE5;">11. Verify the written index.html with curl</span> </p>
<p style="font-family:Helvetica, Arial, &quot;font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	# curl 10.244.0.46
</p>
<pre class="prettyprint">This is a persistent volume from httpd-pod</pre>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	Here we have successfully mounted the PVC at the Apache document root inside the Pod, and the HTML content is returned.
</p>
<p style="font-family:Helvetica, Arial, &quot;font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	
</p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	Related code for this article:
</p>
<p><a href="https://git.showerlee.com/showerlee/kube-deploy" rel="nofollow">https://git.showerlee.com/showerlee/kube-deploy</a></p>
<p style="font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	All done...
</p>
<p style="font-family:Helvetica, Arial, &quot;font-size:16px;vertical-align:baseline;color:#555555;background-color:#FFFFFF;">
	</p>
<div>Notice: this article is licensed under the <a rel="external" href="http://creativecommons.org/licenses/by-nc-sa/3.0/deed.zh" title="署名-非商业性使用-相同方式共享 3.0 Unported">CC BY-NC-SA 3.0</a> license</div><div>Please credit the source when republishing: <a rel="external" title="DevOps技术分享" href="http://www.showerlee.com/archives/2280">DevOps技术分享</a></div><div>Permalink: <a rel="external" title="Kubernetes之Persistent Volume(持久化卷)" href="http://www.showerlee.com/archives/2280">http://www.showerlee.com/archives/2280</a></div>]]></content:encoded>
			<wfw:commentRss>http://www.showerlee.com/archives/2280/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Kubernetes: Pod, ReplicaSet, Deployment, Label, Service</title>
		<link>http://www.showerlee.com/archives/2246</link>
		<comments>http://www.showerlee.com/archives/2246#comments</comments>
		<pubDate>Thu, 22 Feb 2018 07:16:24 +0000</pubDate>
		<dc:creator>showerlee</dc:creator>
				<category><![CDATA[DevTools]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[k8s]]></category>

		<guid isPermaLink="false">http://www.showerlee.com/?p=2246</guid>
		<description><![CDATA[Following the previous post, Kubernetes 1.9 + Docker 17 offline installation and deployment, this introduces some important [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>
	Following the previous post, <a href="http://www.showerlee.com/archives/2200" target="_blank">Kubernetes 1.9 + Docker 17 offline installation and deployment</a>, this article introduces some important Kubernetes concepts and components.
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;font-size:24px;"><strong>Pod:</strong></span>
</p>
<p>
	<span>A Pod is a group of tightly coupled containers that share the PID, IPC, Network, and UTS namespaces; it is the basic unit of Kubernetes scheduling. The design intent of the Pod is to let multiple containers in one Pod share the network and filesystem, so they can be composed into a service through simple, efficient inter-process communication and file sharing.</span>
</p>
<p>
	<span style="color:#E53333;">Drawback: a bare Pod provides no scaling or high availability; when it dies, it is not recovered automatically.</span>
</p>
<p>
	<span style="color:#337FE5;">1. Create a Pod</span>
</p>
<p>
	# vi pod.yaml
</p>
<pre class="prettyprint">apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - image: httpd
    name: httpd
    imagePullPolicy: Always</pre>
<p>
	# kubectl create -f pod.yaml
</p>
<p>
	<span style="color:#337FE5;">2. View the Pod</span>
</p>
<p>
	# kubectl get pods
</p>
<pre class="prettyprint">NAME    READY     STATUS    RESTARTS   AGE
demo    1/1       Running      0       8d</pre>
<p># kubectl describe pods</p>
<pre class="prettyprint">...</pre>
<p><span style="color:#337FE5;">3. Delete the Pod</span> </p>
<p>
	# kubectl delete pod demo
</p>
<p>
	
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;font-size:24px;"><strong>Replicaset:</strong></span>
</p>
<p>
	<span>A ReplicaSet inherits all the features of a Pod; in addition, it maintains a replica count defined in a pre-created template and controls it automatically, scaling Pods up or down by changing the number of replicas.</span>
</p>
<p>
	<span style="color:#E53333;">Drawback: the template cannot be modified, so a new image version cannot be rolled out.</span>
</p>
<p>
	<span> </span>
</p>
<p>
	<span style="color:#337FE5;">1. Create a ReplicaSet</span>
</p>
<p>
	# vi replicaset.yaml
</p>
<pre class="prettyprint">apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: demo-rc
  labels:
    app: demo-rc
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-rc
  template:
    metadata:
      labels:
        app: demo-rc
    spec:
      containers:
      - name: httpd
        image: httpd
        imagePullPolicy: Always</pre>
<p>
	# kubectl create -f replicaset.yaml
</p>
<p>
	<span style="color:#337FE5;">2. View the ReplicaSet</span>
</p>
<p>
	# kubectl get replicaset
</p>
<pre class="prettyprint">NAME      DESIRED   CURRENT   READY     AGE
demo-rc   2         2         2         8d</pre>
<p><span># kubectl describe replicaset</span> </p>
<pre class="prettyprint">...</pre>
<p><span style="color:#337FE5;">3. Delete the ReplicaSet</span></p>
<p>
	#&nbsp;kubectl delete replicaset demo-rc
</p>
<p>
	
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;font-size:24px;"><strong>Deployment</strong></span>
</p>
<p>
	<span>A Deployment inherits all the characteristics of Pods and ReplicaSets and, in addition, supports live rolling updates of the template, giving us the application life cycle features needed in production.</span>
</p>
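<p>
	The rolling update behavior can be pictured as replacing pods built from the old template hash with pods from the new hash, a batch at a time, while holding the replica count. A minimal, illustrative Python sketch (a simplification, not the real rollout algorithm; the hash strings are made up):
</p>

```python
# Illustrative rolling-update sketch: delete up to max_unavailable
# old-template pods per step, then recreate pods from the new template
# back up to the replica count, until every pod uses the new template.

def rolling_update(pods, new_hash, replicas, max_unavailable=1):
    """Yield the pod-template-hash list after each rollout step."""
    pods = list(pods)
    while any(h != new_hash for h in pods):
        for _ in range(max_unavailable):
            old = next((h for h in pods if h != new_hash), None)
            if old is not None:
                pods.remove(old)          # take down an old pod
        while len(pods) < replicas:
            pods.append(new_hash)         # bring up a new pod
        yield list(pods)

steps = list(rolling_update(["oldhash"] * 2, "newhash", replicas=2))
```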
<p>
	<span> </span>
</p>
<p>
	<span style="color:#337FE5;">1. Create a Deployment</span>
</p>
<p>
	# vi deployment.yaml
</p>
<pre class="prettyprint">apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deployment
  labels:
    app: httpd-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: httpd-demo
  template:
    metadata:
      labels:
        app: httpd-demo
    spec:
      containers:
      - name: httpd
        image: httpd
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        env:
        - name: VERSION
          value: "v1"</pre>
<p>
	# kubectl create -f deployment.yaml
</p>
<p>
	<span style="color:#337FE5;">2. View the Deployment</span>
</p>
<p>
	# kubectl get deployment
</p>
<pre class="prettyprint">NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
httpd-deployment   2         2         2            2           8d</pre>
<p># kubectl get pods -o wide</p>
<pre class="prettyprint">NAME                               READY     STATUS    RESTARTS   AGE       IP            NODE
httpd-deployment-956697567-8mqch   1/1       Running   0          8d        10.244.0.36   kube-master
httpd-deployment-956697567-wcbs6   1/1       Running   0          8d        10.244.0.37   kube-master</pre>
<p># kubectl describe deployment</p>
<pre class="prettyprint">...</pre>
<p>
	<span style="color:#337FE5;">3. Update the Deployment</span>
</p>
<p>
	The following command opens the template in the vi editor for editing.
</p>
<p>
	<span># kubectl edit -f deployment.yaml</span>
</p>
<p>
	<span>This command makes the current edits take effect.</span>
</p>
<p>
	<span># kubectl apply -f deployment.yaml<br />
</span>
</p>
<p>
	<span>Checking the ReplicaSets again, you can see the old revision has been retired and the new one has taken over.</span>
</p>
<p>
	<span># kubectl get rs</span>
</p>
<pre class="prettyprint">NAME                          DESIRED   CURRENT   READY     AGE
httpd-deployment-6b98d94474   2         2         2         1m
httpd-deployment-956697567    0         0         0         7m</pre>
<p>
	<span style="color:#337FE5;">4. Scale out and in</span>
</p>
<p>
	<span style="color:#000000;">You can change the replicas value to scale the Deployment out or in:</span>
</p>
<p>
	<span># kubectl scale deployment/httpd-deployment --replicas=1</span>
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;">5. Delete the Deployment</span>
</p>
<p>
	#&nbsp;kubectl delete deployment httpd-deployment
</p>
<p>
	
</p>
<p>
	<span style="font-size:24px;color:#337FE5;"><strong>Label</strong></span>
</p>
<p>
	Labels are key/value pairs attached to Pods that carry user-defined attributes. For example, you might create "tier" and "app" labels, marking frontend Pods with (tier=frontend, app=myapp) and backend Pods with (tier=backend, app=myapp). Selectors can then pick out the Pods carrying a given set of labels, letting a specific Pod or Deployment be matched by a specific Service to realize a particular network configuration.
</p>
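<p>
	Selector matching is simply a subset test over the label dictionaries. A small illustrative Python sketch of the matchLabels semantics (the pod names and labels are the hypothetical examples above):
</p>

```python
# matchLabels semantics: a selector matches a pod when every key/value
# pair in the selector appears in the pod's labels.

def matches(selector, labels):
    return all(labels.get(k) == v for k, v in selector.items())

pods = {
    "frontend-1": {"tier": "frontend", "app": "myapp"},
    "backend-1":  {"tier": "backend",  "app": "myapp"},
}

selector = {"tier": "frontend", "app": "myapp"}
selected = [name for name, labels in pods.items() if matches(selector, labels)]
# selected -> ["frontend-1"]
```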
<p>
	
</p>
<p>
	<span style="color:#337FE5;font-size:24px;"><strong>Service</strong></span>
</p>
<p>
	<span>A Service is an abstraction over an application that provides load balancing and service discovery via labels. The IPs and ports of the Pods matching its labels form the endpoints, and kube-proxy load-balances traffic sent to the service IP across those endpoints.<br />
Every Service is automatically assigned a cluster IP (a virtual address reachable only from inside the cluster) and a DNS name; other containers can reach the service through that address or name without knowing anything about the backend containers.</span>
</p>
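<p>
	The endpoint mechanism described above can be sketched as: collect the IP:port of every Pod whose labels match the Service selector, then spread traffic across that list. The real kube-proxy programs iptables/ipvs rules; the round-robin below is only an illustrative Python stand-in, with example pod IPs:
</p>

```python
import itertools

def endpoints(pods, selector, port):
    """pods: {name: (labels, ip)} -> list of 'ip:port' endpoints."""
    return [
        "%s:%d" % (ip, port)
        for labels, ip in pods.values()
        if all(labels.get(k) == v for k, v in selector.items())
    ]

pods = {
    "httpd-1": ({"app": "httpd-demo"}, "10.244.0.36"),
    "httpd-2": ({"app": "httpd-demo"}, "10.244.0.37"),
    "other":   ({"app": "something-else"}, "10.244.0.99"),
}

eps = endpoints(pods, {"app": "httpd-demo"}, 80)
# eps -> ["10.244.0.36:80", "10.244.0.37:80"]
balancer = itertools.cycle(eps)   # naive round-robin over the endpoints
```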
<p>
	<span style="color:#337FE5;">1. Widen the NodePort range</span>
</p>
<p>
	<span>Kubernetes limits NodePorts to 30000-32767 by default; to use common ports (80, 8080, 443) you need to widen this range.</span>
</p>
<p>
	<span>#&nbsp;vi /etc/kubernetes/manifests/kube-apiserver.yaml</span>
</p>
<p>
	<span>Add the following node port setting between --service-cluster-ip-range and --insecure-port:</span>
</p>
<pre class="prettyprint lang-bsh">...
- --service-cluster-ip-range=10.96.0.0/12
- --service-node-port-range=0-32767
- --insecure-port=0
....</pre>
<p>
	Restart the service
</p>
<p>
	# systemctl restart kubelet
</p>
<p>
	<span><br />
<span style="color:#337FE5;">2. Create the Service</span></span>
</p>
<p>
	# vi svc.yaml
</p>
<pre class="prettyprint">apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 80
  selector:
    app: httpd-demo</pre>
<p>
	<span style="color:#337FE5;"> </span>
</p>
<p>
	<span style="color:#000000;"># kubectl create -f svc.yaml</span>
</p>
<p>
	<span style="color:#E53333;">Tip: to expose an external port for a given Pod or Deployment, the selector key/value on this Service must match that workload's labels.</span>
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;">3. View the exposed port</span>
</p>
<p>
	# kubectl get svc demo
</p>
<pre class="prettyprint lang-bsh">NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
demo      NodePort   10.100.96.157   &lt;none&gt;        80:80/TCP   1h</pre>
<p># kubectl describe service demo</p>
<pre class="prettyprint lang-bsh">Name:                     demo
Namespace:                default
Labels:                   &lt;none&gt;
Annotations:              &lt;none&gt;
Selector:                 app=httpd-demo
Type:                     NodePort
IP:                       10.100.96.157
Port:                     &lt;unset&gt;  80/TCP
TargetPort:               80/TCP
NodePort:                 &lt;unset&gt;  80/TCP
Endpoints:                10.244.0.36:80,10.244.0.37:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   &lt;none&gt;</pre>
<p>
	<span style="color:#337FE5;">4. Verify access from outside the cluster</span>
</p>
<p>
	<a href="http://www.showerlee.com/archives/2246/%e6%88%aa%e5%9b%be00"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/02/截图00.png" alt="截图00" width="479" height="154" class="alignnone size-full wp-image-2260" /></a>
</p>
<p>
	
</p>
<p>
	Code for this article:
</p>
<p><a href="https://git.showerlee.com/showerlee/kube-deploy" rel="nofollow">https://git.showerlee.com/showerlee/kube-deploy</a></p>
<p>
	Finished...</p>
<div>Notice: this article is licensed under <a rel="external" href="http://creativecommons.org/licenses/by-nc-sa/3.0/deed.zh" title="署名-非商业性使用-相同方式共享 3.0 Unported">CC BY-NC-SA 3.0</a></div><div>Please credit the source when republishing: <a rel="external" title="DevOps技术分享" href="http://www.showerlee.com/archives/2246">DevOps技术分享</a></div><div>Permalink: <a rel="external" title="Kubernetes之Pod, Replicaset, Deployment, Label, Service" href="http://www.showerlee.com/archives/2246">http://www.showerlee.com/archives/2246</a></div>]]></content:encoded>
			<wfw:commentRss>http://www.showerlee.com/archives/2246/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Kubernetes 1.9 + Docker 17 Offline Installation and Deployment</title>
		<link>http://www.showerlee.com/archives/2200</link>
		<comments>http://www.showerlee.com/archives/2200#comments</comments>
		<pubDate>Tue, 13 Feb 2018 09:00:46 +0000</pubDate>
		<dc:creator>showerlee</dc:creator>
				<category><![CDATA[DevTools]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[k8s]]></category>

		<guid isPermaLink="false">http://www.showerlee.com/?p=2200</guid>
		<description><![CDATA[I recently looked into the currently popular Kubernetes (k8s), focusing on its latest 1.9 release, which compared with the older 1 [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>
	<a href="http://www.showerlee.com/archives/2200/image"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/02/image.png" alt="image" width="431" height="117" class="alignnone size-full wp-image-2230" /></a>
</p>
<p>
	
</p>
<p>
	I recently looked into the currently popular Kubernetes (<a href="http://www.showerlee.com/archives/tag/k8s" title="查看k8s中的全部文章" class="tag_link">k8s</a>), focusing on its latest 1.9 release. Compared with the older 1.1 release it really does simplify a lot of configuration: it uses the kubeadm tool to deploy the whole cluster in bulk, lowering the learning curve for beginners getting started.
</p>
<p>
	The mainstream platforms for installing <a href="http://www.showerlee.com/archives/tag/k8s" title="查看k8s中的全部文章" class="tag_link">k8s</a> today are CentOS 7 and Ubuntu; since I have years of ops and development experience with CentOS, I chose the former.
</p>
<p>
	Also, the official site recently removed the Kubernetes 1.1 CentOS 7 setup instructions, so I recommend <span>doing all installation and deployment with the 1.9 release.</span>
</p>
<p>
	I am sharing this document because there is currently almost no complete, reliably accurate <a href="http://www.showerlee.com/archives/tag/k8s" title="查看k8s中的全部文章" class="tag_link">k8s</a> installation guide online; I have therefore consolidated the scattered <a href="http://www.showerlee.com/archives/tag/k8s" title="查看k8s中的全部文章" class="tag_link">k8s</a> resources on the web into a fairly dependable example of an offline <a href="http://www.showerlee.com/archives/tag/k8s" title="查看k8s中的全部文章" class="tag_link">k8s</a> 1.9 installation.
</p>
<p>
	
</p>
<p>
	What is Kubernetes?
</p>
<p>
	<span>Kubernetes is an open-source system for automatically deploying, scaling, and managing containerized applications. It aims to provide "a platform for automating deployment, scaling, and operations of application containers across clusters of hosts". It supports a range of container tools; Docker is currently the mainstream container choice.</span>
</p>
<p>
	<span> </span>
</p>
<p>
	Why install offline?
</p>
<p>
	Because kubeadm downloads its images from Google's registry by default, which is currently unreachable from mainland China, I found a 1.9 offline package online; you only need to import the images from the offline package onto the corresponding nodes.
</p>
<p>
	
</p>
<p>
	<span>Let's start...</span>
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;font-size:16px;"><span style="color:#337FE5;font-family:Helvetica;font-size:16px;background-color:#FFFFFF;"><strong>Installation environment</strong></span></span>
</p>
<p>
	Local Desktop: MacOS
</p>
<p>
	Virtual Machine: Virtual Box
</p>
<p>
	Virtual System: CentOS 7.4
</p>
<p>
	Kubernetes: Kubernetes1.9
</p>
<p>
	Docker:&nbsp;17.03.2-ce
</p>
<p>
	kube-master 10.110.16.10
</p>
<p>kube-node-1 10.110.16.11</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;font-family:Helvetica;font-size:16px;background-color:#FFFFFF;"><strong>I. System environment configuration</strong></span>
</p>
<p>
	<span style="color:#E53333;">(required on both Master and Node)</span>
</p>
<p>
	<span style="color:#337FE5;">1. Download the offline installation package</span>
</p>
<p>
	<span style="color:#333333;">Link:&nbsp;</span><a href="https://pan.baidu.com/s/1c2O1gIW" target="_blank">https://pan.baidu.com/s/1c2O1gIW</a><span style="color:#333333;">&nbsp;password: 9s92</span>
</p>
<p>
	<span>After downloading, upload it to the root user's home directory on the VM</span>
</p>
<p>
	# tar jxvf k8s_images.tar.bz2
</p>
<p>
	<span style="color:#337FE5;">2. Install dependencies</span>
</p>
<p>
	<span style="color:#E53333;">Tip: update the CentOS 7 kernel to the latest version so the routing features k8s needs can be enabled; reboot after updating so the new kernel takes effect</span>
</p>
<p>
	# yum install policycoreutils-python libtool-ltdl libseccomp device-mapper-libs kernel ntpdate
</p>
<p>
	# reboot
</p>
<p>
	<span style="color:#337FE5;">3. Add local hosts entries</span>
</p>
<p>
	<span># echo "10.110.16.10 kube-master" &gt;&gt; /etc/hosts</span>
</p>
<p>
	<span># echo "10.110.16.11 kube-node-1" &gt;&gt; /etc/hosts</span>
</p>
<p>
	<span style="color:#337FE5;">4. Set up SSH key auth from kube-master to kube-node-1 <span style="color:#E53333;">(Master only)</span></span>
</p>
<p>
	<span># ssh-keygen</span>
</p>
<p>
	<span># ssh-copy-id kube-node-1</span>
</p>
<p>
	<span style="color:#337FE5;">5. Disable SELinux and the firewall</span>
</p>
<p>
	<span># vi /etc/sysconfig/selinux</span>
</p>
<pre class="prettyprint lang-bsh">...
SELINUX=disabled 
...</pre>
<p><span># setenforce 0</span></p>
<p>
	<span># systemctl stop firewalld&nbsp; &amp;&amp; systemctl disable firewalld</span>
</p>
<p>
	<span style="color:#337FE5;">6. Routing configuration</span>
</p>
<p>
	<span><span style="color:#000000;">#</span><span style="color:#000000;">&nbsp;</span><span style="color:#000000;">modprobe br_netfilter</span><span style="color:#337FE5;"></span><span style="color:#000000;"></span></span>
</p>
<p>
	<span># echo "net.bridge.bridge-nf-call-ip6tables = 1" &gt;&gt; /etc/sysctl.conf</span>
</p>
<p>
	<span># echo "net.bridge.bridge-nf-call-iptables = 1" &gt;&gt; /etc/sysctl.conf</span>
</p>
<p>
	<span># sysctl -p</span>
</p>
<p>
	<span style="color:#337FE5;">7. Disable the swap partition</span>
</p>
<p>
	If you configured a swap partition when installing CentOS 7, turn it off first, since k8s requires swap to be disabled; ignore this step if you have no swap.
</p>
<p>
	Check the active swap areas
</p>
<p>
	# swapon -s
</p>
<pre class="prettyprint lang-bsh">Filename                                Type            Size    Used    Priority
/swapfile                               file    1996796 1214364 -1</pre>
<p>
	Turn off the swap area
</p>
<p>
	# swapoff /swapfile
</p>
<p>
	
</p>
<p>
	Comment out the swap entry so swap does not start at boot
</p>
<p>
	# vi /etc/fstab
</p>
<pre class="prettyprint lang-bsh">...
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
...</pre>
<p><span style="color:#337FE5;">8. Enable system time synchronization</span> </p>
<p>
	<span style="color:#000000;"># systemctl enable ntpdate &amp;&amp; systemctl start ntpdate</span>
</p>
<p>
	
</p>
<p>
	
</p>
<p>
	
</p>
<p>
	<strong><span style="color:#337FE5;font-size:18px;">II. Docker configuration</span></strong>
</p>
<p>
	<span style="color:#E53333;">(required on both Master and Node)</span>
</p>
<p>
	<span style="color:#337FE5;">1. Install docker-ce 17.03</span>
</p>
<p># rpm -ihv docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm<br />
# rpm -ivh docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm</p>
<p><span style="color:#337FE5;">2. Point Docker at the Aliyun registry mirror</span></p>
<p>
	<span style="color:#E53333;">Tip: visit </span><span style="color:#E53333;"><a href="https://cr.console.aliyun.com/#/accelerator" rel="nofollow">https://cr.console.aliyun.com/#/accelerator</a></span><span style="color:#E53333;">, register an account, get your personal mirror accelerator address, and fill it into the following config file</span>
</p>
<p>
	<span style="color:#000000;"># mkdir /etc/docker</span>
</p>
<p># vi /etc/docker/daemon.json</p>
<pre class="prettyprint lang-bsh">...
{
  "registry-mirrors": ["https://xxxxxxx.mirror.aliyuncs.com"]
}
...</pre>
<p><span style="color:#337FE5;">3. Restart Docker and enable it at boot</span><br />
# systemctl daemon-reload<br />
# systemctl restart docker<br />
# systemctl enable docker</p>
<p>
	<span style="color:#337FE5;">4. Load the offline images locally</span>
</p>
<p>
	<span><span style="color:#000000;"># cd /root</span></span>
</p>
<p>
	<span style="color:#000000;"># mv&nbsp;/root/k8s_images/docker_images/etcd-amd64\:v3.1.10.tar /root/k8s_images/docker_images/etcd-amd64_v3.1.10.tar</span>
</p>
<p># vi load_image.sh</p>
<pre class="prettyprint lang-bsh">docker load &lt; /root/k8s_images/docker_images/etcd-amd64_v3.1.10.tar
docker load &lt;/root/k8s_images/docker_images/flannel_v0.9.1-amd64.tar
docker load &lt;/root/k8s_images/docker_images/k8s-dns-dnsmasq-nanny-amd64_v1.14.7.tar
docker load &lt;/root/k8s_images/docker_images/k8s-dns-kube-dns-amd64_1.14.7.tar
docker load &lt;/root/k8s_images/docker_images/k8s-dns-sidecar-amd64_1.14.7.tar
docker load &lt;/root/k8s_images/docker_images/kube-apiserver-amd64_v1.9.0.tar
docker load &lt;/root/k8s_images/docker_images/kube-controller-manager-amd64_v1.9.0.tar
docker load &lt;/root/k8s_images/docker_images/kube-scheduler-amd64_v1.9.0.tar
docker load &lt; /root/k8s_images/docker_images/kube-proxy-amd64_v1.9.0.tar
docker load &lt;/root/k8s_images/docker_images/pause-amd64_3.0.tar
docker load &lt; /root/k8s_images/kubernetes-dashboard_v1.8.1.tar</pre>
<p>
	# sh load_image.sh
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;font-family:Helvetica;font-size:16px;background-color:#FFFFFF;"><strong>III. Kubernetes configuration</strong></span>
</p>
<p>
	<span style="color:#337FE5;">1. Install kubelet, kubeadm and kubectl </span><span style="color:#E53333;background-color:#FFFFFF;">(required on both Master and Node)</span>
</p>
<p># cd k8s_images<br />
# rpm -ivh socat-1.7.3.2-2.el7.x86_64.rpm<br />
# rpm -ivh kubernetes-cni-0.6.0-0.x86_64.rpm kubelet-1.9.0-0.x86_64.rpm kubectl-1.9.0-0.x86_64.rpm<br />
# rpm -ivh kubeadm-1.9.0-0.x86_64.rpm</p>
<p>
	<span style="color:#337FE5;">2. Enable and start kubelet </span><span style="color:#E53333;background-color:#FFFFFF;">(required on both Master and Node)</span>
</p>
<p># systemctl enable kubelet &amp;&amp; sudo systemctl start kubelet</p>
<p>
	<span style="color:#337FE5;">3. Make the kubelet Cgroup Driver match Docker's </span><span style="color:#E53333;background-color:#FFFFFF;">(required on both Master and Node)</span>
</p>
<p># docker info |grep "Cgroup Driver"<br />
If the <span>"Cgroup Driver" value is</span> "cgroupfs", write that value into the kubeadm config file<br />
# vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf</p>
<pre class="prettyprint lang-bsh">...
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
...</pre>
<p># systemctl daemon-reload &amp;&amp; systemctl restart kubelet</p>
<p>
	<span style="color:#337FE5;">4. Initialize the Master </span><span style="color:#E53333;background-color:#FFFFFF;">(Master only)</span>
</p>
<p># kubeadm reset</p>
<p>
	# kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.244.0.0/16
</p>
<p>
	<span style="color:#E53333;">Tip: 10.244.0.0/16 here matches the subnet in the default config file; if you need a different subnet, make the corresponding change in the kube-flannel.yml file later</span>
</p>
<p>
	If everything goes well, it outputs a join command containing a token
</p>
<pre class="prettyprint"># kubeadm join --token 288f34.481c8faa5636966f 10.110.16.10:6443 --discovery-token-ca-cert-hash sha256:8036fac3b76e1a0dd189edaa8f7d36f2b51429dd0c0cf7ea0d78e7972d611002</pre>
<p>
	The token expires after 24 hours; regenerate it with the following commands.
</p>
<p># kubeadm token create</p>
<p>
	# kubeadm join --token "Your token code" "Your master ip address":6443
</p>
<p>
	<span style="color:#333333;">If you forget the token within the 24 hours, you can retrieve it with the following command.</span>
</p>
<p>
	# kubeadm token list
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;">5. Set the user environment variables </span><span style="color:#E53333;background-color:#FFFFFF;">(Master only)</span>
</p>
<p>If you are using root<br />
# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" &gt;&gt; ~/.bash_profile<br />
# source ~/.bash_profile</p>
<p>
	For a non-root user
</p>
<p># mkdir -p $HOME/.kube<br />
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config<br />
# sudo chown $(id -u):$(id -g) $HOME/.kube/config</p>
<p>
	Check the kubectl version
</p>
<p>
	# kubectl version
</p>
<pre class="prettyprint lang-bsh">Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T20:55:30Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}</pre>
<p>
	
</p>
<p>
	<span style="color:#337FE5;">6. Configure k8s networking with the flannel component </span><span style="color:#E53333;background-color:#FFFFFF;">(Master only)</span>
</p>
<p># wget&nbsp;<a href="https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml" rel="nofollow">https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml</a></p>
<p>
	<span style="color:#E53333;">Tip: the subnet configured in this yml file must match the "--pod-network-cidr=10.244.0.0/16" used in the Master initialization above</span>
</p>
<p>
	# kubectl create -f kube-flannel.yml
</p>
<p>
	
</p>
<p><span style="color:#337FE5;">7. View details of all pods </span><span style="color:#E53333;background-color:#FFFFFF;">(Master only)</span></p>
<p>
	# kubectl get pod --all-namespaces -o wide
</p>
<p>
	# kubectl describe pods
</p>
<p>
	
</p>
<p>
	<span style="color:#337FE5;">8. Join the Node </span><span style="color:#E53333;background-color:#FFFFFF;">(Node only)</span>
</p>
<p>
	Use the join command from the Master initialization output
</p>
<p># kubeadm join --token 288f34.481c8faa5636966f 10.110.16.10:6443 --discovery-token-ca-cert-hash&nbsp;sha256:8036fac3b76e1e0dd189edaa8f7d36f1b51429dd0c0cf7ea0d78e7972d611002</p>
<p>
	<span style="color:#E53333;">Tip: this step is optional; since I add the Master into the Node scheduling pool so it acts as both Master and Node, readers with limited resources and only one test machine can skip this step.</span>
</p>
<p>
	
</p>
<p><span style="color:#337FE5;">9. Add the Master to Node scheduling </span><span style="color:#E53333;background-color:#FFFFFF;">(Master only)</span></p>
<p>
	<span style="color:#E53333;">Tip: by default the Master does not take part in Node scheduling; lift that restriction with the following command.</span>
</p>
<p>
	# kubectl taint nodes kube-master node-role.kubernetes.io/master-
</p>
<p>
	If you have multiple nodes and need to manage scheduling manually, keeping pods out of a given node's scheduling list, use the following commands:
</p>
<p>
	Disable scheduling on the node
</p>
<p>
	# kubectl cordon <span>kube-node-1</span>
</p>
<p>
	<span>Check that scheduling is disabled</span>
</p>
<p>
	<span># kubectl get node kube-node-1<br />
</span>
</p>
<pre class="prettyprint lang-bsh">NAME          STATUS                     ROLES     AGE       VERSION
kube-node-1   Ready,SchedulingDisabled   node    11d       v1.9.0</pre>
<p>
	
</p>
<p>
	<span><span>Re-enable scheduling on the node</span><br />
</span>
</p>
<p>
	#&nbsp;kubectl uncordon <span>kube-node-1</span>
</p>
<p>
	Check that scheduling is re-enabled
</p>
<p>
	<span># kubectl get nodes&nbsp;<span>kube-node-1</span></span>
</p>
<pre class="prettyprint lang-bsh">NAME          STATUS  ROLES     AGE       VERSION
kube-node-1   Ready   node    11d       v1.9.0</pre>
<p>
	
</p>
<p>
	<span style="color:#337FE5;">10. Test the k8s cluster </span><span style="color:#E53333;background-color:#FFFFFF;">(Master only)</span>
</p>
<p>
	Use k8s to create an Apache web instance from the httpd image with 2 replicas.
</p>
<p># kubectl run httpd-app --image=httpd --replicas=2</p>
<p>
	# kubectl get deployment
</p>
<pre class="prettyprint lang-bsh">NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
httpd-app   2         2         2            2           2h</pre>
<p>
	# kubectl get pods -o wide
</p>
<pre class="prettyprint lang-bsh">NAME                         READY     STATUS    RESTARTS   AGE       IP            NODE
httpd-app-5fbccd7c6c-27nzv   1/1       Running   0          2h        10.244.0.13   kube-master
httpd-app-5fbccd7c6c-n9qs2   1/1       Running   0          2h        10.244.0.12   kube-master</pre>
<p>
	
</p>
<p>
	<span style="color:#337FE5;">11. Test the web instance </span><span style="color:#E53333;background-color:#FFFFFF;">(Master only)</span>
</p>
<p>
	#&nbsp;curl 10.244.0.13
</p>
<pre class="prettyprint lang-bsh">&lt;html&gt;&lt;body&gt;&lt;h1&gt;It works!&lt;/h1&gt;&lt;/body&gt;&lt;/html&gt;</pre>
<p>#&nbsp;<span>curl 10.244.0.12</span> </p>
<pre class="prettyprint lang-bsh">&lt;html&gt;&lt;body&gt;&lt;h1&gt;It works!&lt;/h1&gt;&lt;/body&gt;&lt;/html&gt;</pre>
<p>
	
</p>
<p>
	<span style="color:#337FE5;">12. Install kubernetes-dashboard <span style="color:#E53333;background-color:#FFFFFF;">(Master only)</span></span>
</p>
<p>
	<span style="color:#E53333;"><span style="background-color:#FFFFFF;color:#E53333;">Tip: Kubernetes has no graphical management interface by default; here we add a dashboard container to provide a manageable GUI</span></span>
</p>
<p>
	# cd&nbsp;<span>/root/k8s_images</span>
</p>
<p>
	Add the dashboard container
</p>
<p>
	#&nbsp;kubectl create -f kubernetes-dashboard.yaml
</p>
<p>
	Create the auth file
</p>
<p>
	#&nbsp;vi /etc/kubernetes/pki/basic_auth_file
</p>
<pre class="prettyprint lang-bsh">#user,password,userid
admin,admin,2</pre>
<p>
	Configure dashboard basic auth
</p>
<p>
	#&nbsp;vi /etc/kubernetes/manifests/kube-apiserver.yaml
</p>
<p>
	Add one line:
</p>
<p>
	- --basic_auth_file=/etc/kubernetes/pki/basic_auth_file
</p>
<p>
	Like this:
</p>
<pre class="prettyprint">...
- --etcd-servers=http://127.0.0.1:2379
- --basic_auth_file=/etc/kubernetes/pki/basic_auth_file
image: gcr.io/google_containers/kube-apiserver-amd64:v1.9.0
...</pre>
<p>Update the auth configuration</p>
<p>
	<s>#&nbsp;</s><s>kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml </s>(may be unnecessary; still to be verified)
</p>
<p>
	# systemctl restart kubelet
</p>
<p>
	Add the admin account to the cluster-admin role to grant full k8s privileges
</p>
<p>
	#&nbsp;kubectl create clusterrolebinding login-on-dashboard-with-cluster-admin --clusterrole=cluster-admin --user=admin
</p>
<p>
	Open the dashboard home page
</p>
<p><a href="https://10.110.16.10:32666" rel="nofollow">https://10.110.16.10:32666</a></p>
<p>
	Username/password: admin/admin
</p>
<p>
	<a href="http://www.showerlee.com/archives/2200/k8s-dashboard"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/02/k8s-dashboard.png" alt="k8s-dashboard" width="1024" height="458" class="alignnone size-full wp-image-2502" /></a>
</p>
<p>
	<a href="http://www.showerlee.com/archives/2200/overview"><img onerror="javascript:this.src='http://www.showerlee.com/wp-content/themes/BYMT/images/images_error.jpg'" src="http://www.showerlee.com/wp-content/uploads/2018/02/overview.png" alt="overview" width="1024" height="553" class="alignnone size-large wp-image-2500" /></a>
</p>
<p>
	
</p>
<p>
	All done...
</p>
<p>
	
</p>
<p>
	More Kubernetes component guides:
</p>
<p>
	<span id="__kindeditor_bookmark_start_0__" style="font-family:Helvetica;font-size:13px;vertical-align:baseline;color:#111111;background-color:#FFFFFF;"><a href="http://www.showerlee.com/archives/2246" target="_blank">Kubernetes: Pod, ReplicaSet, Deployment, Label, Service</a></span>
</p>
<p>
	<a href="http://www.showerlee.com/archives/2280" target="_blank">Kubernetes: Persistent Volumes</a>
</p>
<p>
	<a href="http://www.showerlee.com/archives/2308" target="_blank">Kubernetes: Secrets and ConfigMaps</a>
</p>
<p>
	<a href="http://www.showerlee.com/archives/2455" target="_blank">Kubernetes: Helm package management</a>
</p>
<p></p>
<div>Notice: this article is licensed under <a rel="external" href="http://creativecommons.org/licenses/by-nc-sa/3.0/deed.zh" title="署名-非商业性使用-相同方式共享 3.0 Unported">CC BY-NC-SA 3.0</a></div><div>Please credit the source when republishing: <a rel="external" title="DevOps技术分享" href="http://www.showerlee.com/archives/2200">DevOps技术分享</a></div><div>Permalink: <a rel="external" title="Kubernates1.9+Docker17离线安装部署" href="http://www.showerlee.com/archives/2200">http://www.showerlee.com/archives/2200</a></div>]]></content:encoded>
			<wfw:commentRss>http://www.showerlee.com/archives/2200/feed</wfw:commentRss>
		<slash:comments>5</slash:comments>
		</item>
	</channel>
</rss>
