
As a cloud company, we usually build our container clusters on cloud platforms such as GCP or AWS, but sometimes we need to think about how to deliver a to-go solution to our customers. So here comes a challenge: scale our cloud platform down and put it into a pocket (mmm, I mean “minikube”).
Here is a simple example of what you might build in GKE:
- DB: MongoDB as a StatefulSet bound to Google persistent disks, using the StatefulSet from “Running MongoDB on Kubernetes with StatefulSets” as an example.
- Web service (fooBar): a Golang application which accesses MongoDB and is exposed through a load balancer. Because fooBar is a proprietary application, its image is stored in GCR (Google Container Registry), not in Docker Hub.
Below we list some notes and tips for migrating this kind of service to minikube.
“Here” is a good article for understanding StatefulSets. However, it uses some resources you cannot use in minikube, such as the Google persistent disk behind the “fast” storage class.
Let me put in the full YAML settings and give more detail.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--smallfiles"
        - "--noprealloc"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi
The volumeClaimTemplates provide stable storage using PersistentVolumes provisioned by a PersistentVolume provisioner (on GCP, Google persistent disks).
The “fast” storage class specifies a Google persistent disk with SSD performance, so in this case our volume claims are tied to Google Cloud persistent disks, which are not available in minikube.
There are two solutions for this:
- Use a hostPath volume directly (a minimal sketch follows the volumeClaimTemplates example below).
- Keep the volumeClaimTemplates and simply remove the persistent-disk storage-class annotation. Kubernetes will then select the best provisioner available on our system (in minikube, for now, that is hostPath):
volumeClaimTemplates:
- metadata:
    name: minikube-claim
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 10Gi
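For the first option, a minimal hostPath sketch could look like the following (the name mongo-pv, the path /data/mongo-pv, and the 10Gi size are placeholders picked for illustration; a matching PersistentVolumeClaim would then bind to it):
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    # path inside the minikube VM that backs the volume
    path: /data/mongo-pv
EOF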
When you deploy to GKE, you usually use GCR (Google Container Registry) to store your Docker images. If you want to move your project from GKE to minikube, you have the following options:
- Start minikube with --insecure-registry if your registry uses a self-signed certificate.
- Use minikube addons configure registry-creds.
- Create the image pull secret yourself: delete any stale one with kubectl delete secret gcr and recreate it:
kubectl create secret docker-registry gcr \
--docker-server=https://asia.gcr.io \
--docker-username=oauth2accesstoken \
--docker-password="$(gcloud auth print-access-token)" \
--docker-email=<your-email>
Here is the documentation from minikube.
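Once the secret exists, your pods still have to reference it. One option (my own addition, not from the original setup) is to attach it to the default service account so every pod in the namespace can pull from GCR; you can also list it per pod under imagePullSecrets instead:
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "gcr"}]}'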
OK, that’s done. If you are wondering how to connect to your web service fooBar, you can just run minikube service fooBar.
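If you only want the endpoint (for curl or scripts) rather than having minikube open a browser, add the --url flag:
minikube service fooBar --url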
A throwaway busybox pod is very useful when you do not know what is happening with your pods, especially for debugging storage or networking issues:
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: busybox
spec:
  selector:
    matchLabels:
      app: busybox
  replicas: 1
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command:
        - sleep
        - "86400"
EOF
kubectl exec -it <BUSYBOX_POD_NAME> -- nslookup redis.default
For example:
kubectl exec -it busybox-7fc4f6df6-fhxnp -- nslookup redis.default
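You can check the MongoDB StatefulSet members the same way. Assuming you also created the headless mongo service from the original article, each pod gets a stable DNS name such as mongo-0.mongo (that exact name is my assumption based on that setup):
kubectl exec -it <BUSYBOX_POD_NAME> -- nslookup mongo-0.mongo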
Remember to remove the settings directory when you uninstall minikube:
minikube delete
rm -rf ~/.minikube
You might run into the following error when starting minikube:
Starting local Kubernetes v1.8.0 cluster...
Starting VM...
E1124 22:16:44.080926 87887 start.go:150] Error starting host: Temporary Error: Error configuring auth on host: Temporary Error: ssh command error:
Solution: rm -rf ~/.minikube
sudo su -
cd /tmp
wget https://archive.apache.org/dist/kafka/0.8.2.2/kafka_2.9.1-0.8.2.2.tgz
tar -zxvf kafka_2.9.1-0.8.2.2.tgz -C /usr/local/
cd /usr/local/kafka_2.9.1-0.8.2.2
sbt update
sbt package
cd /usr/local
ln -s kafka_2.9.1-0.8.2.2 kafka
echo "" >> ~/.bash_profile
echo "" >> ~/.bash_profile
echo "# KAFKA" >> ~/.bash_profile
echo "export KAFKA_HOME=/usr/local/kafka" >> ~/.bash_profile
source ~/.bash_profile
echo "export KAFKA=$KAFKA_HOME/bin" >> ~/.bash_profile
echo "export KAFKA_CONFIG=$KAFKA_HOME/config" >> ~/.bash_profile
source ~/.bash_profile
$KAFKA/zookeeper-server-start.sh $KAFKA_CONFIG/zookeeper.properties
$KAFKA/kafka-server-start.sh $KAFKA_CONFIG/server.properties
Create a topic and list it (note that $KAFKA already points at Kafka's bin directory):
$KAFKA/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
$KAFKA/kafka-topics.sh --list --zookeeper localhost:2181
test
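To check the broker end to end, you can push a couple of messages through the console producer and read them back with the console consumer (these commands use the 0.8.x-era --broker-list and --zookeeper flags that match this install):
$KAFKA/kafka-console-producer.sh --broker-list localhost:9092 --topic test
$KAFKA/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning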

spf13/cobra is a great package if you want to write your own console app. Even kubectl, the Kubernetes CLI, uses cobra for its console app.
Let's use kubectl as a simple example of the commands it supports:
kubectl get nodes
kubectl create -f RESOURCE
kubectl delete -f RESOURCE
Taking those commands as an example, there are sub-commands such as get, create, and delete. Here is how to add the same sub-commands to your own app (ex: kctl):
- Run cobra init in your repo. It will create /cmd and main.go.
- Run cobra add get to add the sub-command get.
- Run kctl get to see the prompt from the console confirming the sub-command is wired up.
- Do the same for create and delete. You will see the related help in kctl --help.
Cobra's generator can add top-level sub-commands from the console, but nested commands have to be added manually. For example, we need one for kctl get nodes.
Create nodes.go in /cmd; its init() registers the new command under getCmd:
package cmd

import (
    "fmt"

    "github.com/spf13/cobra"
)

func init() {
    getCmd.AddCommand(nodesCmd)
}

var nodesCmd = &cobra.Command{
    Use:   "nodes",
    Short: "Print the nodes info",
    Long:  `Print detailed information about the cluster nodes`,
    Run: func(cmd *cobra.Command, args []string) {
        fmt.Println("get nodes is called")
    },
}
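A quick way to try it (assuming the cobra-generated project builds from the repo root and you call the binary kctl):
go build -o kctl .
./kctl get nodes
# prints: get nodes is called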
The following notes are about git rebase, along with some related git tips, including how to completely delete an accidentally committed file from the git history.
Before starting to rebase:
git fetch origin
Pull the latest code and start the rebase (ex: rebase onto master):
git rebase origin/master
When the rebase stops on a conflict, modify the code, stage it, and continue:
… modify code
git add your_changed_code
git rebase --continue
Force push, because the rebased tree differs from the remote:
git push -f -u origin HEAD
-u: set the upstream (short for --set-upstream).
-f: force (needed because the rebase rewrote your history).
Rebase interactively:
git rebase -i origin/develop
It will open an editor where you select each commit and mark it as pick or squash.
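For example, the todo list might look like this (the hashes and messages below are made up for illustration); every squash line is folded into the pick above it when you save and quit:
pick a1b2c3d Add mongo StatefulSet yaml
squash d4e5f6a Fix storage class typo
pick 9f8e7d6 Add busybox debug deployment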
git log --stat
git reset HEAD~ (roll back the last commit)
git log --stat --decorate
git checkout -t origin/evan/test1
git log --stat --decorate
git fetch origin
git rebase -i origin/develop
vim .gitignore
git add -v .gitignore
git rebase --continue
git status
git submodule update
git log --stat
git push -f origin HEAD
git rebase {from commit} --onto origin/develop
List the logs before commit 33322:
git log 33322~
#!/bin/bash
# List every blob in the repository history, sorted by size (gcut and gnumfmt are the GNU coreutils names on macOS).
git rev-list --objects --all | git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' | awk '/^blob/ {print substr($0,6)}' | sort --numeric-sort --key=2 | gcut --complement --characters=13-40 | gnumfmt --field=2 --to=iec-i --suffix=B --padding=7 --round=nearest
You have to do this before running filter-branch (work on a fresh mirror clone):
git clone --mirror
#!/bin/bash
set -e
# Remove the given paths from every commit, branch, and tag in history.
# Single-file version: git filter-branch --index-filter 'git rm --cached --ignore-unmatch filename' HEAD
git filter-branch --index-filter "git rm -r -f --cached --ignore-unmatch $*" --prune-empty --tag-name-filter cat -- --all
# Drop the refs/original/ backups, expire the reflog, and garbage-collect so the old blobs are really gone.
git for-each-ref --format="%(refname)" refs/original/ | xargs -n 1 git update-ref -d
git reflog expire --expire=now --all
git reset --hard
git gc --aggressive --prune=now
git remote update --prune
git gc --aggressive --prune=now
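After the script finishes, the rewritten history only exists locally. The usual final step (my addition, not part of the original script) is to force-push every ref and ask collaborators to re-clone:
git push origin --force --all
git push origin --force --tags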
Bonus: delete every local branch that has already been merged, except master, rc, and develop:
git branch --merged | grep -E -v 'master|rc|develop' | xargs -I{} git branch -d {}

I finally finished the first course of deeplearning.ai, “Neural Networks and Deep Learning”.
It is a really interesting foundational course. Finishing it is basically like working through the whole O'Reilly Deep Learning book, and the assignments have you implement parts of a DNN classifier yourself with numpy.
I had wanted to pick up deep learning for a while. By chance I read a Coursera review, tried the seven-day free trial, and kept going: besides the Jupyter Notebooks it provides, the assignments are quite fun. My progress is currently at Week 2. I highly recommend it to anyone with even a little programming background; the math inside should be manageable. Along the way you also learn how to use numpy in Python, because the course mainly teaches you how to piece a neural network together with numpy.
Such a simple network can be written as follows:
\[Z^{[1]} = W^{[1]} X + b^{[1]} \\ a^{[1]} = g^{[1]}(Z^{[1]}) \\ Z^{[2]} = W^{[2]} a^{[1]} + b^{[2]} \\ a^{[2]} = g^{[2]}(Z^{[2]})\]…
where, don't forget, \(X = a^{[0]}\).
With that, it can be generalized to:
\[Z^{[l]} = W^{[l]} a^{[l-1]} + b^{[l]} \\ a^{[l]} = g^{[l]}(Z^{[l]})\] where \(l = 1, 2, \dots, L\).
Everything used to determine \(w\) and \(b\) counts as a hyperparameter, for example:

There is actually quite a lot to say about deep neural networks from this figure:
\(W^{[l]}\) has dimension \((n^{[l]}, n^{[l-1]})\), where \(n^{[l]}\) is the number of units in layer \(l\). An example:
The derivation goes: \(Z^{[1]} = W^{[1]} X + b^{[1]}\). Suppose \(Z^{[1]}\) has shape \([3, 1]\) and \(X\) has shape \([2, 1]\) (assume \(b^{[1]} = 0\) for now). For \(W^{[1]} X\) to be \([3, 1]\), \(W^{[1]}\) must be \([3, 2]\), since \([3, 2] \times [2, 1] \rightarrow [3, 1]\). Adding \(b^{[1]}\) back, its shape must also be \([3, 1]\).
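In general (this is the standard dimension rule the example above illustrates):
\[\dim(W^{[l]}) = (n^{[l]}, n^{[l-1]}), \qquad \dim(b^{[l]}) = (n^{[l]}, 1)\]
where \(n^{[l]}\) is the number of units in layer \(l\).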
The building blocks for forward propagation in the assignment are:
- Linear forward: compute \(Z = WX + b\), keep cache = (A, W, b), and return \(Z\) together with the cache.
- Use sigmoid or relu to transform \(Z\) into the activation (denoted \(A\)).
- Use A_prev (the \(A\) from the previous layer) to compute \(Z\) and transform it into \(A\).
- Use the relu activation function for the hidden layers and sigmoid as the output-layer activation.
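Putting those steps together, one forward pass through the whole network is simply:
\[A^{[l]} = \mathrm{ReLU}(W^{[l]} A^{[l-1]} + b^{[l]}) \ \text{for } l = 1, \dots, L-1, \qquad A^{[L]} = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})\]
with \(A^{[0]} = X\).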