[TIL][Docker] Docker SwarmKit learning notes

Introduction

I had actually run through this once before, but never wrote down the exact commands. This time, in order to compare Docker Swarm, Docker SwarmKit, and Docker Swarm Mode, I went through the whole flow again.

Here is a quick record of it.

SwarmKit workflow

Install SwarmKit

Since SwarmKit is a separate open source project, it has to be installed on its own.

go get -u github.com/docker/swarmkit

cd $GOPATH/src/github.com/docker/swarmkit
make binaries
mv bin/* $GOBIN

Note: if you did not install Golang via Brew you may not have $GOBIN set; you can move the binaries to $GOPATH/bin instead.

Create the cluster master node

swarmd -d /tmp/node-1 --listen-control-api /tmp/manager1/swarm.sock --hostname node-1

A quick explanation: /tmp/manager1/swarm.sock is the address of the SwarmKit control socket. To manage this cluster with swarmctl (for example, to look up the join tokens below), you must export it as an environment variable.

export SWARM_SOCKET=/tmp/manager1/swarm.sock

Then query the master's join tokens (in a new console):

export SWARM_SOCKET=/tmp/manager1/swarm.sock

swarmctl cluster inspect default

>ID          : 1piq7f9tr1xlmnui4xhjhsafi
>Name        : default
>Orchestration settings:
>  Task history entries: 5
>Dispatcher settings:
>  Dispatcher heartbeat period: 5s
>Certificate Authority settings:
>  Certificate Validity Duration: 2160h0m0s
>  Join Tokens:
>    Worker: SWMTKN-1-1wttj6u10f9fueptptma9ohf99zcxt0gia1wt3a5odphi6nt1f-c4y428p7wwr23efwo4xw6qiwz
>    Manager: SWMTKN-1-1wttj6u10f9fueptptma9ohf99zcxt0gia1wt3a5odphi6nt1f-cdh5ucqp1xjvh3pp1rvs0two4

Create the cluster worker nodes

Node 2 (node-2)

swarmd -d /tmp/node-2 --hostname node-2 --join-addr 127.0.0.1:4242 --join-token SWMTKN-1-1wttj6u10f9fueptptma9ohf99zcxt0gia1wt3a5odphi6nt1f-c4y428p7wwr23efwo4xw6qiwz

Note that --join-token is required, otherwise the node will not be able to join. Use the token you queried above, not the one in my example :p

Node 3 (node-3)

swarmd -d /tmp/node-3 --hostname node-3 --join-addr 127.0.0.1:4242 --join-token SWMTKN-1-1wttj6u10f9fueptptma9ohf99zcxt0gia1wt3a5odphi6nt1f-c4y428p7wwr23efwo4xw6qiwz

Confirm that all nodes joined successfully

export SWARM_SOCKET=/tmp/manager1/swarm.sock
swarmctl node ls

ID                         Name    Membership  Status  Availability  Manager Status
--                         ----    ----------  ------  ------------  --------------
01kgkezj7wcwij5qcp78raz1i  node-1  ACCEPTED    READY   ACTIVE        REACHABLE *
2ocq4129a4y2nq23hajdsqv0t  node-3  ACCEPTED    READY   ACTIVE
b75cdesh7to4lb35wg17ul12x  node-2  ACCEPTED    READY   ACTIVE

Create a Redis 3.0.5 service

swarmctl service create --name redis --image redis:3.0.5

Confirm the service was created

swarmctl service ls

ID                         Name   Image        Replicas
--                         ----   -----        --------
94h7xat76kjd50as63f5qtsex  redis  redis:3.0.5  1/1

Inspect the service details

swarmctl service inspect redis

ID                : 94h7xat76kjd50as63f5qtsex
Name              : redis
Replicas          : 1/1
Template
 Container
  Image           : redis:3.0.5

Task ID                      Service    Slot    Image          Desired State    Last State              Node
-------                      -------    ----    -----          -------------    ----------              ----
49d41k08a4bifkeo67xv00f5c    redis      1       redis:3.0.5    RUNNING          RUNNING 1 minute ago    node-1

Scale Service using SwarmKit

swarmctl service update redis --replicas 6

Right after the update, the new replicas are still being scheduled:

swarmctl service ls

ID                         Name   Image        Replicas
--                         ----   -----        --------
94h7xat76kjd50as63f5qtsex  redis  redis:3.0.5  1/6


Run it again a moment later and all six replicas are up:

swarmctl service ls

ID                         Name   Image        Replicas
--                         ----   -----        --------
94h7xat76kjd50as63f5qtsex  redis  redis:3.0.5  6/6

Inspect the details again

swarmctl service inspect redis
ID                : 94h7xat76kjd50as63f5qtsex
Name              : redis
Replicas          : 6/6
Template
 Container
  Image           : redis:3.0.5

Task ID                      Service    Slot    Image          Desired State    Last State                Node
-------                      -------    ----    -----          -------------    ----------                ----
49d41k08a4bifkeo67xv00f5c    redis      1       redis:3.0.5    RUNNING          RUNNING 2 minutes ago     node-1
83av0fuyn7wqqk32fvpmrtu2o    redis      2       redis:3.0.5    RUNNING          RUNNING 27 seconds ago    node-3
9lvd4xtwxlbskktxa7asa04fe    redis      3       redis:3.0.5    RUNNING          RUNNING 28 seconds ago    node-2
74ca5vzx1wbviycrmuq8tb1mi    redis      4       redis:3.0.5    RUNNING          RUNNING 27 seconds ago    node-2
6rwgiz70onihivlex5jdi96fj    redis      5       redis:3.0.5    RUNNING          RUNNING 27 seconds ago    node-1
brp9u9fkk26xs6cmrseye4hcu    redis      6       redis:3.0.5    RUNNING          RUNNING 27 seconds ago    node-3

Update the service to 3.0.6

By default this is not a rolling update; all tasks are updated straight to the new version.

swarmctl service update redis --image redis:3.0.6                                                                      

94h7xat76kjd50as63f5qtsex


swarmctl service inspect redis
ID                : 89831rq7oplzp6oqcqoswquf2
Name              : redis
Replicas          : 6
Template
 Container
  Image           : redis:3.0.6

Task ID                      Service    Instance    Image          Desired State    Last State                Node
-------                      -------    --------    -----          -------------    ----------                ----
7947mlunwz2dmlet3c7h84ln3    redis      1           redis:3.0.6    RUNNING          RUNNING 34 seconds ago    node-3
56rcujrassh7tlljp3k76etyw    redis      2           redis:3.0.6    RUNNING          RUNNING 34 seconds ago    node-1
8l7bwrduq80pkq9tu4bsd95p4    redis      3           redis:3.0.6    RUNNING          RUNNING 36 seconds ago    node-2
3xb1jxytdo07mqccadt06rgi0    redis      4           redis:3.0.6    RUNNING          RUNNING 34 seconds ago    node-1
16aate5akcimsye9cp5xis1ih    redis      5           redis:3.0.6    RUNNING          RUNNING 34 seconds ago    node-2
dws408a3gz0zx0bygq3aj0ztk    redis      6           redis:3.0.6    RUNNING          RUNNING 34 seconds ago    node-3

To use a rolling update instead, updating 2 tasks every 10 seconds:

swarmctl service update redis --image redis:3.0.7 --update-parallelism 2 --update-delay 10s

swarmctl service inspect redis
ID                   : 94h7xat76kjd50as63f5qtsex
Name                 : redis
Replicas             : 4/6
Update Status
 State               : UPDATING
 Started             : 14 seconds ago
 Message             : update in progress
Template
 Container
  Image              : redis:3.0.7

Task ID                      Service    Slot    Image          Desired State    Last State                  Node
-------                      -------    ----    -----          -------------    ----------                  ----
3lbgomkdkfszohl0jkhy7bgwp    redis      1       redis:3.0.7    RUNNING          PREPARING 13 seconds ago    node-2
6db9gj8ssnfxhydtk00fgn93x    redis      2       redis:3.0.6    RUNNING          RUNNING 10 minutes ago      node-2
dgq6iwt0eh951gpe7bd7kxmcf    redis      3       redis:3.0.6    RUNNING          RUNNING 10 minutes ago      node-2
4rhy2jd7ecu968e0p51wohdn5    redis      4       redis:3.0.6    RUNNING          RUNNING 10 minutes ago      node-1
61sb7zev74d9jzh5vvud1vy4z    redis      5       redis:3.0.6    RUNNING          RUNNING 10 minutes ago      node-1
bdy30kw7zq8mytmo6yzdqjm5d    redis      6       redis:3.0.7    RUNNING          PREPARING 13 seconds ago    node-3

References

[TIL] Running native X11 window apps on MacOSX through Docker

Original article

Bring Linux apps to the Mac Desktop with Docker

Benefits:

  • Some apps only ship a Linux version and have no MacOSX build
  • The app runs sandboxed inside a container

Preparation:

First install xquartz (the X11 server for the Mac)

brew install Caskroom/cask/xquartz

Install socat, a TCP/UDP relaying tool

brew install socat

Write the Dockerfile

vi Dockerfile

The content can be copied straight from the original article…

Let's get started

First download the example app, the Linux version of Slack

wget https://downloads.slack-edge.com/linux_releases/slack-desktop-2.1.0-amd64.deb

First run socat in another terminal

socat TCP-LISTEN:6000,reuseaddr,fork UNIX-CLIENT:\"$DISPLAY\"

Remember to keep it running; it is what relays the X11 client in the container to the X server (XQuartz) on the host.

Build the Docker image

docker build -t slack:2.1.0 .

Run it

DISPLAY points at the host's LAN IP (192.168.0.15 in my case) on display :0, which is the TCP port 6000 that socat is listening on:

docker run -e DISPLAY=192.168.0.15:0 --name slack -d slack:2.1.0

[Golang] Notes on FOSDEM 2016: Building Data applications with Go: from Bloom filters to Data pipelines

Background:

This post is a quick write-up after watching the FOSDEM 2016 talk (slides are here), plus a chance to review the Bloom Filter I studied earlier. My earlier code is here.

Thoughts

The talk mainly covers building a data application with Golang: from the Bloom Filter, to Count-Min (which keeps hashed counters, mainly to record how many times an item has appeared), to HyperLogLog (which estimates how many distinct items have been seen).

About the Bloom Filter

What is it?

A Bloom Filter is a data structure used to quickly check whether a value exists in a set. It has the following properties:

  • Very small space usage (the original values are not stored); only the bit positions produced by k hash functions are kept.
  • It can tell you that a value definitely does not exist (no false negatives, but false positives are possible). Lookup time complexity: \(O(k)\) for the k hash functions. A minimal Go sketch follows this list.
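To make the behaviour concrete, here is a minimal Go sketch. It is illustrative only and not the program I linked above; the sizes and the double-hashing scheme (two FNV hashes combined to derive k positions) are my own choices:

package main

import (
	"fmt"
	"hash/fnv"
)

// bloomFilter: a bit array plus k positions derived per value.
type bloomFilter struct {
	bits []bool
	k    int
}

func newBloomFilter(m, k int) *bloomFilter {
	return &bloomFilter{bits: make([]bool, m), k: k}
}

// positions derives k bit positions from two FNV hashes (double hashing).
func (b *bloomFilter) positions(value string) []int {
	h1 := fnv.New64a()
	h1.Write([]byte(value))
	h2 := fnv.New64()
	h2.Write([]byte(value))
	sum1, sum2 := h1.Sum64(), h2.Sum64()

	pos := make([]int, b.k)
	for i := 0; i < b.k; i++ {
		pos[i] = int((sum1 + uint64(i)*sum2) % uint64(len(b.bits)))
	}
	return pos
}

// add marks the k bits for value.
func (b *bloomFilter) add(value string) {
	for _, p := range b.positions(value) {
		b.bits[p] = true
	}
}

// mightContain returns false only if value was definitely never added;
// true means "maybe present" (false positives are possible).
func (b *bloomFilter) mightContain(value string) bool {
	for _, p := range b.positions(value) {
		if !b.bits[p] {
			return false
		}
	}
	return true
}

func main() {
	bf := newBloomFilter(1024, 3)
	bf.add("http://example.com/page1")
	fmt.Println(bf.mightContain("http://example.com/page1")) // true
	fmt.Println(bf.mightContain("http://example.com/page2")) // false: definitely not added
}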

Use cases:

  • A crawler can record whether a URL has already been crawled
  • Google's malicious URL detection
  • Cassandra uses it to check whether a partition might contain a given value

References

[DevOps] Learning to install Mesos and DC/OS

Install Mesos

Install Mesos locally with Vagrant

See Install Mesos via Vagrant

Thoughts:

Although the official documentation is quite clear, Vagrant itself has a few pitfalls of its own.

On top of that, if you do not watch the memory and CPU settings, you may well find that Mesos cannot scale tasks.

To spell the pitfalls out…

  1. Vagrant's private IP is not always correct after every vagrant up; it often drifts and you have to restart the VM.
  2. Vagrant machines can share files through the /vagrant folder. However, /vagrant is only the shared folder between a guest OS and the host OS; if two guests need to share files, it still has to go through the host. Also, the sharing is not real time: files are carried from the host into the guest's /vagrant when the guest boots. Copying a guest file to the host shows up immediately, but for another guest to pick it up you have to restart that guest.

Installing the Mesos CLI

  • Install pip first
    • curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
    • sudo python get-pip.py
  • Then install virtualenv
    • sudo pip install virtualenv
  • Install mesos.cli
    • sudo pip install mesos.cli

Installing DC/OS on GCE (Google Compute Engine)

Refer to this article

A few things to watch out for:

  • Remember to log in to your Google Cloud account first: gcloud auth login
  • When editing group_vars/all in the https://github.com/dcos-labs/dcos-gce installer, keep the following in mind:
    • bootstrap_public_ip: use the IP of the bootstrap machine you set up earlier. Make sure the IP format is valid and that the bootstrap machine is in the same subnet as the master/agent machines you are about to create.
    • subnet: you must create and use a new subnet; the default one cannot be used.
      • Otherwise you will hit the error: The referenced subnetwork resource cannot be found
  • When editing the hosts file:
    • The master0 IP must be an internal IP. You cannot fill in an arbitrary address such as 192.168.0.1; it has to fall inside the GCP subnet's internal IP range.
      • Otherwise you will hit the error: Requested internal IP is outside the subnetwork CIDR range
  • Before running ansible-playbook -i hosts install.yml, make sure you have logged in to GCP with gcloud auth login, otherwise it will keep failing with permission errors. [Update 20160803: it is not only your own user that needs to log in to gcloud; make sure the machine's root user has logged in as well.]
  • If ansible-playbook -i hosts install.yml keeps getting stuck on docker image problems, it helps to clean out everything listed by sudo docker images.
  • If any error shows up in the final setup steps, check whether master0 has already been created, otherwise the error will keep recurring.
    • The error looks like: The resource 'projects/YOUPROJECT/zones/asia-east1-c/instances/master0' already exists

Docker 1.12 update:

20160803: after Docker was updated to 1.12, DC/OS currently cannot be installed successfully.

Failed to start docker.service: Unit docker.socket failed to load: No such file or directory.

Logging in after DC/OS is installed

Just open master0's external IP in a browser and the DC/OS login screen will appear.

Installation guide reference

[TIL] A survey of Chubby/Zookeeper and their fault tolerance

Difference between Chubby and Zookeeper

  • Consensus algorithm:
    • Chubby: uses the Paxos consensus algorithm.
    • Zookeeper: uses ZAB (a modified algorithm derived from Paxos).
  • Access path:
    • Chubby: everything must go through the leader; followers do not accept any command directly.
    • Zookeeper: any follower can accept commands, but writes are forwarded back to the leader.
  • Data out-of-date:
    • Chubby: no risk of stale data, because all data comes from the leader.
    • Zookeeper: reads can return stale data if done without a sync command. A sync forces the follower to catch up with the leader before returning the result to the client (see the sketch after this list).
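As a hedged illustration of the sync-before-read pattern, here is a minimal Go sketch using the github.com/samuel/go-zookeeper/zk client; the ensemble address and the znode path are placeholders I made up:

package main

import (
	"fmt"
	"time"

	"github.com/samuel/go-zookeeper/zk"
)

func main() {
	// Connect to a (placeholder) local ZooKeeper ensemble.
	conn, _, err := zk.Connect([]string{"127.0.0.1:2181"}, 5*time.Second)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	const path = "/example/config" // placeholder znode

	// Sync asks the server this session is connected to to catch up
	// with the leader, so the read below does not return stale data.
	if _, err := conn.Sync(path); err != nil {
		panic(err)
	}

	data, _, err := conn.Get(path)
	if err != nil {
		panic(err)
	}
	fmt.Printf("latest value: %s\n", data)
}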

Fault tolerance of quorum-backup and primary-backup:

Majority Quorums:

For Paxos or ZAB, a leader election needs to reach a majority quorum ( n/2 + 1 ). This means that with n = 2f + 1 servers the service can tolerate at most f servers failing at the same time (quorum-backup replication); for example, a 5-node ensemble (f = 2) still has a 3-node majority after two failures.

Primary-Backup

  • Every write must be confirmed by all followers.
  • Much slower than "quorum-backup". The small sketch below compares the failure tolerance implied by the two schemes.
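Just to make the arithmetic explicit, a tiny Go sketch of the two tolerance formulas used in this note (nothing more than the n = 2f + 1 and n = f + 1 relations discussed here):

package main

import "fmt"

// With a majority quorum, n = 2f + 1 servers tolerate f failures.
func quorumTolerance(n int) int { return (n - 1) / 2 }

// With primary-backup (every replica kept in sync), n = f + 1 servers tolerate f failures.
func primaryBackupTolerance(n int) int { return n - 1 }

func main() {
	for _, n := range []int{3, 5, 7} {
		fmt.Printf("n=%d: quorum tolerates %d failure(s), primary-backup tolerates %d\n",
			n, quorumTolerance(n), primaryBackupTolerance(n))
	}
}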

Kafka ISR (In-Sync Replica)

An In-Sync Replica (ISR) is a backup policy similar to primary-backup, but with synchronous replication.

The brokers keep two kinds of replica data:

  • The ISR (the most up-to-date copies), kept by the leader.
  • Followers outside the ISR, whose replica data may lag behind.
  • Zookeeper is used to propagate the ISR state so all followers can catch up from it.

In this case, with n = f + 1 servers in total, we can tolerate f servers failing (primary-backup replication).

Trade-offs:

  • Follower data can lag noticeably behind the leader, but for Kafka this is usually acceptable.
  • If a follower cannot catch up with the leader, the leader drops it from the ISR.
  • If the leader fails, there is a high chance of losing data when the next leader has to be chosen from outside the ISR, because those followers lag behind.
  • Using Zookeeper to store the ISR can introduce split-brain issues (two majority groups).

Refer to Kafka ISR

References

[DevOps] Kubernetes learning notes, part 2: continuing to learn Kubernetes on Google Compute Engine (Google Cloud)

Using Google Kubernetes Engine (Container Engine)

Having just finished "Scalable Microservices with Kubernetes" on Udacity, the examples below are mainly what the course covers.

First use Container Engine to create a new container cluster

(P.S. you need to remember your GKE cluster name, e.g. mygke)

Use kubectl to connect to your container cluster

gcloud container clusters get-credentials mygke

Set up an nginx connection with TLS

Set up TLS

Make sure you already have the keys

ls tls
> ca-key.pem ca.pem     cert.pem   key.pem

Use kubectl to create a TLS secret

kubectl create secret generic tls-certs --from-file=tls/
> secret "tls-certs" created

Show the contents of the TLS secret

kubectl describe secrets tls-certs

> Name:		tls-certs
> Namespace:	default
> Labels:		<none>
> Annotations:	<none>
>
> Type:	Opaque
> 
> Data
> ====
> ca-key.pem:	1679 bytes
> ca.pem:		1180 bytes
> cert.pem:	1249 bytes
> key.pem:	1675 bytes

Create a config map for the nginx proxy

kubectl create configmap nginx-proxy-conf --from-file=nginx/proxy.conf

> configmap "nginx-proxy-conf" created

Look at the nginx-proxy-conf configmap in detail

kubectl describe configmap nginx-proxy-conf

> Name:		nginx-proxy-conf
> Namespace:	default
> Labels:		<none>
> Annotations:	<none>

> Data
> ====
> proxy.conf:	176 bytes

Create the nginx pod file

Prepare the secure-monolith pod file

vi  secure-monolith.yaml

The full content is as follows:

apiVersion: v1
kind: Pod
metadata:
  name: "secure-monolith"
  labels:
    app: monolith
spec:
  containers:
    - name: nginx
      image: "nginx:1.9.14"
      lifecycle:
        preStop:
          exec:
            command: ["/usr/sbin/nginx","-s","quit"]
      volumeMounts:
        - name: "nginx-proxy-conf"
          mountPath: "/etc/nginx/conf.d"
        - name: "tls-certs"
          mountPath: "/etc/tls"
    - name: monolith
      image: "udacity/example-monolith:1.0.0"
      ports:
        - name: http
          containerPort: 80
        - name: health
          containerPort: 81
      resources:
        limits:
          cpu: 0.2
          memory: "10Mi"
      livenessProbe:
        httpGet:
          path: /healthz
          port: 81
          scheme: HTTP
        initialDelaySeconds: 5
        periodSeconds: 15
        timeoutSeconds: 5
      readinessProbe:
        httpGet:
          path: /readiness
          port: 81
          scheme: HTTP
        initialDelaySeconds: 5
        timeoutSeconds: 1
  volumes:
    - name: "tls-certs"
      secret:
        secretName: "tls-certs"
    - name: "nginx-proxy-conf"
      configMap:
        name: "nginx-proxy-conf"
        items:
          - key: "proxy.conf"
            path: "proxy.conf"

Create the pod from the file

kubectl create -f secure-monolith.yaml

Show the pod status

kubectl get pods secure-monolith

> NAME              READY     STATUS    RESTARTS   AGE
> secure-monolith   2/2       Running   0          13m

Forward a port on your local machine to the pod

kubectl port-forward secure-monolith 10443:443

Test the HTTPS connection with curl

curl --cacert tls/ca.pem https://127.0.0.1:10443

> {"message":"Hello"}

Show more detailed logs

kubectl logs -c nginx secure-monolith

> 127.0.0.1 - - [22/Jul/2016:16:56:21 +0000] "GET / HTTP/1.1" 200 20 "-" "curl/7.43.0" "-"

Installation guide reference