[DevOps] Kubernetes Study Notes, Part 2: Continuing to Learn Kubernetes on Google Compute Engine (Google Cloud)

Using Google Kubernetes Engine (Container Engine)

Having just finished Udacity's "Scalable Microservices with Kubernetes", the following examples mostly cover what was taught in that course.

First, use Container Engine to create a new Container Cluster.

(P.S. You need to remember your GKE cluster name (e.g. mygke))

Connect to your Container Cluster via kubectl

gcloud container clusters get-credentials mygke

Set up an nginx connection with TLS

Create the TLS certs

Confirm that you already have the keys

ls tls
> ca-key.pem ca.pem     cert.pem   key.pem
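If you do not have these keys yet, a self-signed pair can be generated with openssl. This is only a sketch with assumed file names; the Udacity course material ships its own certs (including the ca-*.pem files, which this one-liner does not produce):

```shell
# Generate a throwaway self-signed cert/key pair into tls/ (file names assumed;
# this does NOT recreate the course's ca.pem / ca-key.pem).
mkdir -p tls
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout tls/key.pem -out tls/cert.pem \
  -days 365 -subj "/CN=localhost"
ls tls
```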

Create a tls secret with kubectl

kubectl create secret generic tls-certs --from-file=tls/
> secret "tls-certs" created

Show the contents of the tls secret

kubectl describe secrets tls-certs

> Name:		tls-certs
> Namespace:	default
> Labels:		<none>
> Annotations:	<none>
>
> Type:	Opaque
> 
> Data
> ====
> ca-key.pem:	1679 bytes
> ca.pem:		1180 bytes
> cert.pem:	1249 bytes
> key.pem:	1675 bytes
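Note that the byte counts above are the decoded sizes; inside the Secret object the values are stored base64-encoded. A local sketch of that round-trip (no cluster needed; with a real cluster you would pipe the value from kubectl get secret into base64 -d):

```shell
# Kubernetes stores Secret values base64-encoded; simulate the round-trip locally.
encoded=$(printf 'fake-cert-data' | base64)
echo "stored in the Secret: $encoded"
printf '%s' "$encoded" | base64 -d   # what the container sees under the mount path
```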

Create a config map for the nginx proxy

kubectl create configmap nginx-proxy-conf --from-file=nginx/proxy.conf

> configmap "nginx-proxy-conf" created

Inspect the nginx-proxy-conf configmap

kubectl describe configmap nginx-proxy-conf

> Name:		nginx-proxy-conf
> Namespace:	default
> Labels:		<none>
> Annotations:	<none>

> Data
> ====
> proxy.conf:	176 bytes

Create an nginx pod file

Prepare the secure-monolith pod file

vi secure-monolith.yaml

The contents are as follows:

apiVersion: v1
kind: Pod
metadata:
  name: "secure-monolith"
  labels:
    app: monolith
spec:
  containers:
    - name: nginx
      image: "nginx:1.9.14"
      lifecycle:
        preStop:
          exec:
            command: ["/usr/sbin/nginx","-s","quit"]
      volumeMounts:
        - name: "nginx-proxy-conf"
          mountPath: "/etc/nginx/conf.d"
        - name: "tls-certs"
          mountPath: "/etc/tls"
    - name: monolith
      image: "udacity/example-monolith:1.0.0"
      ports:
        - name: http
          containerPort: 80
        - name: health
          containerPort: 81
      resources:
        limits:
          cpu: 0.2
          memory: "10Mi"
      livenessProbe:
        httpGet:
          path: /healthz
          port: 81
          scheme: HTTP
        initialDelaySeconds: 5
        periodSeconds: 15
        timeoutSeconds: 5
      readinessProbe:
        httpGet:
          path: /readiness
          port: 81
          scheme: HTTP
        initialDelaySeconds: 5
        timeoutSeconds: 1
  volumes:
    - name: "tls-certs"
      secret:
        secretName: "tls-certs"
    - name: "nginx-proxy-conf"
      configMap:
        name: "nginx-proxy-conf"
        items:
          - key: "proxy.conf"
            path: "proxy.conf"

Create the pod from the file

kubectl create -f secure-monolith.yaml

Show the pod status

kubectl get pods secure-monolith

> NAME              READY     STATUS    RESTARTS   AGE
> secure-monolith   2/2       Running   0          13m

Forward a port on your local machine to the pod

kubectl port-forward secure-monolith 10443:443

Test the HTTPS connection with curl

curl --cacert tls/ca.pem https://127.0.0.1:10443

> {"message":"Hello"}

Show more detailed logs

kubectl logs -c nginx secure-monolith

> 127.0.0.1 - - [22/Jul/2016:16:56:21 +0000] "GET / HTTP/1.1" 200 20 "-" "curl/7.43.0" "-"

Reference installation tutorial

[DevOps] Kubernetes Study Notes, Part 1: Playing with K8S via the Single-Node Kubernetes (miniKube)

miniKube: Single-Node Kubernetes

miniKube is a tool released by Google for running Kubernetes on a single machine; installation and usage are both quite simple. Since it runs a local VM, there is also no risk of accidentally incurring Google Cloud charges.

Install

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.6.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

Start

minikube start

SSH into the VM

minikube ssh

Open and display the minikube dashboard

minikube dashboard

Next, install the Kubernetes CLI

Install the Google Cloud SDK

curl https://sdk.cloud.google.com | bash

After the installation you will have gcloud and gsutil, but you still need to install kubectl.

Install kubectl via the Google Cloud SDK

gcloud components install kubectl

A Quick Tutorial

# Startup miniKube
minikube start

# Create a deployment
kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080

# Expose it
kubectl expose deployment hello-minikube --type=NodePort

# Check pod
kubectl get pod

# Scale 
kubectl scale deployment hello-minikube --replicas=4

# Direct link to hello-minikube 
curl $(minikube service hello-minikube --url)

How to Do a Rolling Update on Kubernetes

kubectl edit deployment hello-minikube

Update your service version and apply it (save the file).

You can also update this file via the minikube dashboard:

  • Edit “deployment”
  • Update the “spec” -> “template” -> “spec” -> “containers” -> “image” version.

Differences Between Kubernetes and Docker Swarm Mode

  • Kubernetes Pods play the role of Docker Swarm Services
  • For rolling updates:
    • Kubernetes: brings up new Pods running the new version first, then shuts down the old Pods. That is, if there were originally 3 Pods, the rolling update spins up 3 additional new Pods; once they are stable, the 3 old Pods are shut down.
    • Docker Swarm: the load balancer swaps the idle containers out for the new version.
  • For the routing mesh:
    • Kubernetes: does not have this feature; external access has to go through an explicit Expose
    • Docker Swarm: every Node participates in the routing mesh, so any node can reach the Service
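The Kubernetes-side rolling-update behaviour can be sketched in plain shell (no cluster needed; the Pod names here are made up):

```shell
# Kubernetes-style rolling update sketch: new Pods (v2) come up first,
# then the old (v1) Pods are terminated.
old="web-v1-a web-v1-b web-v1-c"
new=$(echo "$old" | sed 's/v1/v2/g')   # 1. create one new Pod per old Pod
echo "surge phase: $old $new"          # 2. both versions briefly coexist
echo "final state: $new"               # 3. old Pods are shut down
```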

Reference installation tutorial

[TIL] Learning note about Docker Swarm Mode

Start Docker Swarm Mode

Docker Swarm Mode is specific to Docker Swarm version 2, which is only available with Docker 1.12 and later. It is a cluster management system for Docker.

Install Docker-Machine:

It is better to prepare at least 3 docker-machine instances as swarm nodes.

If you use Docker for Mac or the Docker for Windows beta, you still need to install `docker-machine`.

  • Install docker-machine:
curl -L https://github.com/docker/machine/releases/download/v0.7.0/docker-machine-`uname -s`-`uname -m` > /usr/local/bin/docker-machine && \
chmod +x /usr/local/bin/docker-machine

Run Docker Swarm

Start by creating 3 docker machines for cluster management.

docker-machine create --driver virtualbox v1
docker-machine create --driver virtualbox v2
docker-machine create --driver virtualbox v3

Init the cluster leader on v1

docker-machine ssh v1

For example, if your v1 node IP address is 192.168.99.110:

docker swarm init --listen-addr 192.168.99.110:2377 --advertise-addr 192.168.99.110

Init the Other Worker Nodes v2 and v3

Let v2 and v3 join the cluster as worker nodes.

Log in and control v2.

docker-machine ssh v2
docker swarm join --token SWMTKN-1-3h0ndq6j0agkl1inb7sd9gnrk1va4e0sggw74jsaj7xkx75c7n-31coul06qcdb7g411ww8jnurw 192.168.99.110:2377

> This node joined a swarm as a worker.

Log in and control v3.

docker-machine ssh v3

docker swarm join --token SWMTKN-1-3h0ndq6j0agkl1inb7sd9gnrk1va4e0sggw74jsaj7xkx75c7n-31coul06qcdb7g411ww8jnurw 192.168.99.110:2377

> This node joined a swarm as a worker.

Create a Service in Swarm Mode

docker-machine ssh v1
docker service create --name vote -p 8080:80 instavote/vote

Check whether the service exists

docker service ls

ID            NAME  REPLICAS  IMAGE           COMMAND
7lwioo4526w7  vote  1/1       instavote/vote

Check the tasks of the service

docker service ps vote


ID                         NAME    IMAGE           NODE  DESIRED STATE  CURRENT STATE            ERROR
2peq9y4gv2ba3tijnp5vnfuj5  vote.1  instavote/vote  v1    Running        Running 11 minutes ago
b2qpn2e5xhy6hjdvelxjpqt74  vote.2  instavote/vote  v2    Shutdown       Shutdown 21 seconds ago
cjnd7rq37ldmvoq0id8tba7hp  vote.3  instavote/vote  v2    Shutdown       Shutdown 21 seconds ago

Scale it

docker service scale vote=3

You will see that each node is allocated one task.

docker service ps vote

ID                         NAME        IMAGE           NODE  DESIRED STATE  CURRENT STATE                ERROR
2peq9y4gv2ba3tijnp5vnfuj5  vote.1      instavote/vote  v1    Running        Running 13 minutes ago
4x5kihy8z89mj9u2vyne2x3ec  vote.2      instavote/vote  v2    Running        Running 8 seconds ago
9ins324mae19gpzsli925ivtr  vote.3      instavote/vote  v3    Running        Preparing 11 seconds ago

If you reload the page, the container ID changes each time; this is the load balancer built into docker swarm.

Service Update

docker service update --image instavote/vote:movies vote

docker service ls

ID            NAME  REPLICAS  IMAGE                 COMMAND
7lwioo4526w7  vote  2/3       instavote/vote:movies

Rolling Update

docker service update vote --image instavote/vote:movies --update-parallelism 2 --update-delay 10s

This rolling-updates at most two tasks at a time, with a 10-second delay between batches.
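The batch arithmetic behind --update-parallelism can be sketched in plain shell (counts assumed, no swarm needed): with 3 replicas and parallelism 2, the rollout takes ceil(3/2) = 2 batches.

```shell
# Number of update batches a rollout needs: ceil(replicas / parallelism).
replicas=3
parallelism=2
batches=$(( (replicas + parallelism - 1) / parallelism ))
echo "batches: $batches"   # with --update-delay 10s, batches run 10s apart
```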

Global Service

docker service create --mode=global --name prometheus prom/prometheus

Fault Tolerance

You can shut down any server node; the swarm will automatically recover and rescale the tasks onto the remaining servers.

Note: If you don't include the secret and --ca-hash when a worker joins the master, the routing mesh doesn't work correctly.

New Feature in Docker Swarm Mode

Routing Mesh

Once you run a service on any node in this cluster, you can connect to any node to reach that service.

For example:

Assume you have four machines: v1 is the leader and v2, v3, v4 are worker nodes.

docker service create --name vote -p 8080:80 instavote/vote
docker service tasks vote

Once you create a service on port 8080 in this cluster, every node will listen on port 8080 for that service.

No matter which node Docker Swarm schedules the vote service onto (v2, v3, or v4),

you can reach the service through any node:

http://v1:8080
http://v2:8080
http://v3:8080
http://v4:8080

The worker nodes use a gossip protocol to ask the relevant nodes, locate the node actually running the task, and respond directly.

Built-in Load Balancer

A built-in layer-4 load-balancing service.

For example:

  • If you have nodes v1, v2, v3, v4
  • Run and scale vote to 4: docker service scale vote=4
  • Each time you connect to any node, the container ID changes (automatic load balancing)
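The round-robin behaviour can be sketched in plain shell (node names assumed; a real swarm does this at layer 4 in the kernel, not in shell):

```shell
# Round-robin over 4 backends: request i goes to backend (i mod 4).
nodes="v1 v2 v3 v4"
i=0
for request in 1 2 3 4 5; do
  set -- $nodes              # re-split the node list into $1..$4
  shift $(( i % 4 ))         # rotate to the next backend
  echo "request $request -> $1"
  i=$(( i + 1 ))
done
```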

Notes for the Docker 1.12 GM version:

  • From the Docker 1.12 RC to the 1.12 GM version, there is no longer any need to start a service manually. All services start automatically after you create them. (No docker service tasks $SERVICE)
  • If you have multiple network cards, you might need to specify --advertise-addr when you init the docker swarm leader.

Under the hood

Swarm Mode Flow:

  • Manager: docker swarm init --listen-address=xxxx
    • Creates the TLS root CA
  • Worker: docker swarm join xxx
    • The manager creates a new key-pair for this worker
    • The key-pair is signed by the root CA
    • The key is delivered to the worker via TLS

Role and Responsibility

  • Manager:
    • Responsible for orchestration
    • Creates the TLS root CA
    • Performs health checks on each worker
    • Uses the Raft consensus algorithm to sync status and commands between managers.
    • Stores all data in memory; no extra K-V DB.
  • Worker:
    • Uses gossip for job distribution, speeding up worker-node communication.


[TIL] Learning note about Cassandra

Keyspace

  • keyspace == database in SQL
CREATE KEYSPACE KEYSPACE_NAME
  WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 3 };

Then run use KEYSPACE_NAME; in cqlsh before you create/alter/drop/select tables.

Special Data Operation in Cassandra (Set/List/Map/Nested Type/Frozen/Tuple/JSON)

Set data

Table schema for set.

CREATE TABLE users (
  user_id text PRIMARY KEY,
  first_name text,
  last_name text,
  emails set<text>
);

Insert data into table

INSERT INTO users (user_id, first_name, last_name, emails)
  VALUES('frodo', 'Frodo', 'Baggins', {'[email protected]', '[email protected]'});

List data

Table schema

CREATE TABLE users (
  user_id text PRIMARY KEY,
  first_name text,
  last_name text,
  top_places list<text>
);

Data Insertion.

INSERT INTO users (user_id, first_name, last_name, top_places)
  VALUES('frodo', 'Frodo', 'Baggins', [ 'rivendell', 'rohan' ]);

Map Data

Table schema

CREATE TABLE users (
  user_id text PRIMARY KEY,
  first_name text,
  last_name text,
  todo map<timestamp, text>
);

Data Insertion.

INSERT INTO users (user_id, first_name, last_name, todo)
  VALUES('frodo', 'Frodo', 'Baggins', { '2012-9-24' : 'enter mordor',
  '2014-10-2 12:00' : 'throw ring into mount doom' });

Nested Type

First, create a type named address2.

CREATE TYPE address2 (
      street text,
      city text
  );

Using address2 as a field type, create profile. User-defined types need to be wrapped as frozen<address2>.

CREATE TYPE profile (
      mail set<text>,
      phone set<int>,
      address frozen<address2>
  );

Create user_data using profile; likewise, wrap the user-defined type as frozen<profile>.

CREATE TYPE user_data (
      username text,
      userage int,
      userprofile frozen<profile>
  );

Finally, create the table user_profiles2.

CREATE TABLE user_profiles2 (
      id int PRIMARY KEY,
      data frozen<user_data>
  );

Insert data using the JSON-like UDT literal syntax.

INSERT INTO user_profiles2(id, data)
  VALUES (1,
         { 
            username: 'user', 
            userage: 20,
            userprofile: {
                mail: {'[email protected]', '[email protected]'},
                phone: {1234567, 9876543},
                address: {      
                    street : 'Wu fu Rd.',
                    city : 'KAOHSIUNG CITY'
                }    
            }
         }
  );    

Select it:

 id | data
----+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  1 | {username: 'user', userage: 20, userprofile: {mail: {'[email protected]', '[email protected]'}, phone: {1234567, 9876543}, address: {street: 'Wu fu Rd.', city: 'KAOHSIUNG CITY'}}}

Get only part of the data:

select data.userprofile.address from user_profiles2 where id = 1;


 data.userprofile.address
-----------------------------------------------
 {street: 'Wu fu Rd.', city: 'KAOHSIUNG CITY'}

Frozen

User-defined types must be declared frozen when used as a table column.

CREATE TABLE mykeyspace.users (
  id uuid PRIMARY KEY,
  name frozen <fullname>,
  direct_reports set<frozen <fullname>>,     // a collection set
  addresses map<text, frozen <address>>     // a collection map
);

Note: Non-frozen data cannot be used as a PK.

Note: for user-defined types, check your fields or you will get the error:

"Non-frozen User-Defined types are not supported, please use frozen<>"

Tuple data (supported since Cassandra 2.1)

CREATE TABLE collect_things (
  k int PRIMARY KEY,
  v tuple<int, text, float>
);

INSERT INTO collect_things (k, v) VALUES(0, (3, 'bar', 2.1));

SELECT * FROM collect_things;

 k | v
---+-----------------
 0 | (3, 'bar', 2.1)
 

JSON operation (supported since Cassandra 2.2)

Table schema

CREATE TABLE users (
    id text PRIMARY KEY,
    age int,
    state text
);

Insert data with a normal CQL statement

INSERT INTO users (id, age, state) VALUES ('user123', 42, 'TX');

Insert data as JSON.

INSERT INTO users JSON '{"id": "user123", "age": 42, "state": "TX"}';
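The string passed to INSERT ... JSON must be valid JSON, so it can be worth validating the payload locally before handing it to cqlsh (a quick sketch using Python's json.tool; no Cassandra needed):

```shell
# Validate the JSON payload before building the INSERT ... JSON statement.
payload='{"id": "user123", "age": 42, "state": "TX"}'
echo "$payload" | python3 -m json.tool
```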

Gotchas

  • If you want to SELECT * FROM users WHERE some_column = ? on a column that is not part of the primary key, you must first index it with create index on users(some_column); or you will get the error "No secondary indexes on the restricted columns support the provided operators:"
  • There is no way to change the PK; just drop the original table and re-create a new one.
  • A primary key value cannot exceed 65535 bytes in length.
  • You cannot update PK columns, and you cannot query by only part of a composite PK (all parts must be specified).
  • ORDER BY with secondary indexes is not supported, so you cannot ORDER BY the PK while filtering on an indexed value.
  • ORDER BY is only supported when the partition key is restricted by an EQ or an IN.


[TIL] How to Quickly Set Up a Docker Environment on Google Cloud Platform

Using docker-machine is another way to host a Google Compute instance running Docker.

docker-machine create \
  --driver google \
  --google-project $PROJECT \
  --google-zone asia-east1-c \
  --google-machine-type f1-micro $YOUR_INSTANCE

If you want to log in to this machine on the Google Cloud Compute instance, just use docker-machine ssh $YOUR_INSTANCE

Refer to the docker machine gce driver

Note:

  • docker-machine may make slow progress because it needs to connect to Google Compute Engine.
  • Do not run multiple docker-machine creations at the same time, because docker needs to write related information to docker.sock. You might hit the error Wrapper Docker Machine process exiting due to closed plugin server (unexpected EOF).
  • Using docker-machine to create a local vbox VM installs Docker 1.12 rc1, but on GCP it uses 1.11, so we cannot use docker swarm mode there :(

[TIL] Google Cloud Storage Quickstart

Google Cloud Storage:

A flat file structure; its main concepts are:

  • A Bucket (e.g. gs://example) is like a git repo.
  • An Object is like a file.

Access

Here we install via the Google Cloud SDK, which can be installed as follows:

curl https://sdk.cloud.google.com | bash

Create a Bucket

gsutil mb gs://YOUR_BUCKET_NAME

Copy files over

gsutil cp SOURCE DESTINATION

Following that command structure:

Upload a file to Google Cloud Storage:

gsutil cp ./111.txt gs://YOUR_BUCKET_NAME

Pull it back to the local machine:

gsutil cp gs://YOUR_BUCKET_NAME/111.txt ./
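Since gsutil cp mirrors plain cp semantics, the upload/download round-trip can be sketched locally (a directory stands in for the bucket; with a real bucket, replace it with gs://YOUR_BUCKET_NAME):

```shell
# Local stand-in for the bucket round-trip shown above.
mkdir -p /tmp/fake-bucket
printf 'hello storage\n' > 111.txt
cp ./111.txt /tmp/fake-bucket/               # gsutil cp ./111.txt gs://BUCKET
cp /tmp/fake-bucket/111.txt ./111-copy.txt   # gsutil cp gs://BUCKET/111.txt ./
cat ./111-copy.txt
```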
