[TIL][markdown] A handy editor - Typora


Typora website

During today's internal training session, the only question the chief (lead?) asked was: which markdown editor was used for the demo? It turned out there is another handy markdown editor out there: Typora.

  • Live preview while typing (similar to the Dropbox Paper experience) - my favorite
  • LaTeX preview support, LaTeX preview support, LaTeX preview support (so good it's worth saying three times)
  • Tab support
  • Outline view mode
  • Reportedly supports Windows, Ubuntu, and macOS

So I immediately replaced MacDown with it XDDD. Highly recommended!

[TIL][Kubernetes] How to move your GKE into minikube


As a cloud company, we usually build our container clusters on cloud platforms such as GCP or AWS. But sometimes we need to think about how to deliver a to-go solution to our customers. So here comes a challenge: scale our cloud platform down and put it in a pocket (mmm, I mean “minikube”).

An example GKE service

Here is a simple example that you might build in GKE.

  • DB: MongoDB as a StatefulSet bound to a Google persistent disk. We use the StatefulSet from “Running MongoDB on Kubernetes with StatefulSets” as the example.
  • Web service (fooBar): a Golang application that accesses MongoDB behind a load balancer. Because fooBar is a proprietary application, the fooBar image is stored in GCR (Google Container Registry), not in Docker Hub.

How to migrate your service from GKE to minikube

Below are some notes and tips you will need when you migrate your service to minikube.

1. Migrate Persistent Volume to minikube.

Here is a good article for understanding StatefulSets. However, it uses some resources that you cannot use in minikube, such as the Google persistent disk storage class.

Let me put the whole YAML setting here and go into more detail.

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--replSet"
            - rs0
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi

The volumeClaimTemplates section provides stable storage using PersistentVolumes provisioned by a PersistentVolume provisioner (Google persistent disks on GCP).

The “fast” storage class specifies a Google persistent disk with SSD performance. So, in this case, our Kubernetes volume claim is tied to a Google Cloud persistent disk.
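For reference, a “fast” storage class on GKE is usually defined roughly like this (a sketch of a typical GCE persistent-disk StorageClass; your cluster's actual definition may differ):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
```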

How do we migrate the Google persistent disk volume into minikube?

There are two solutions for this:

  • Use hostPath directly.
  • Still using volume claim but remove google persistent disk annotation.

In the second case, we can still use volumeClaimTemplates and just remove the volume provider (persistent disk) annotation. Kubernetes will then select the best provisioner for our system (in minikube, that is currently hostPath).

    - metadata:
        name: minikube-claim
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi
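For the first option, you can pre-create a hostPath PersistentVolume yourself and let the claim bind to it. A minimal sketch (the name mongo-pv and the path /data/mongo are my own illustrative choices):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/mongo
```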

2. Using GCR in minikube

When you deploy to GKE, you usually use GCR (Google Container Registry) to store your Docker images. If you want to move your project from GKE to minikube, you have the following solutions:

  1. Build your own Docker registry. But you will need to handle the following:
    • Use --insecure-registry if you use a self-signed key.
    • Use a public DNS name to request a CA-signed certificate (which goes against the point of moving into minikube).
  2. Use minikube to connect to GCR directly.
    • Minikube has an add-on: minikube addons configure registry-creds
    • Use the temporary token solution (valid for around 30 minutes); here is the reference:
kubectl delete secret gcr

kubectl create secret docker-registry gcr \
    --docker-server=https://asia.gcr.io \
    --docker-username=oauth2accesstoken \
    --docker-password="$(gcloud auth print-access-token)" \
    [email protected]

Here is the document from minikube.

Launch your service

OK, that's done. If you are wondering how to connect to your web service fooBar, you can just call minikube service fooBar.


Using busybox to debug

It is very useful when you don't know what's happening in your pod, especially for storage or network issues.

cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: busybox
  labels:
    app: busybox
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command:
          - sleep
          - "86400"
EOF
kubectl exec -it <BUSYBOX_POD_NAME> -- nslookup redis.default

For example:

kubectl exec -it busybox-7fc4f6df6-fhxnp -- nslookup redis.default

Destroy & Cleanup the minikube

Remember to remove the setting file when you uninstall minikube.

minikube delete
rm -rf ~/.minikube

Cannot start minikube

It might happen for the following reasons:

  1. minikube version change.
  2. docker version change.
  3. VM has already been destroyed by VirtualBox.
Starting local Kubernetes v1.8.0 cluster...
Starting VM...
E1124 22:16:44.080926   87887 start.go:150] Error starting host: Temporary Error: Error configuring auth on host: Temporary Error: ssh command error:

Solution: rm -rf ~/.minikube

[TIL][SMACK] Install and run Kafka in Mac OSX

Why not Kafka 0.11 or 1.0

  • The homebrew kafka formula uses 0.11 and could not launch on my computer, and it is hard to find out in detail why homebrew/kafka failed. (issue)
  • The transactional coordinator is still not supported by the sarama Golang client. (issue)

Install Kafka 0.8 manually (as of 2017/11)

sudo su - 
cd /tmp 
wget https://archive.apache.org/dist/kafka/
tar -zxvf kafka_2.9.1- -C /usr/local/
cd /usr/local/kafka_2.9.1-

sbt update
sbt package

cd /usr/local
ln -s kafka_2.9.1- kafka

echo "" >> ~/.bash_profile
echo "" >> ~/.bash_profile
echo "# KAFKA" >> ~/.bash_profile
echo "export KAFKA_HOME=/usr/local/kafka" >> ~/.bash_profile
source ~/.bash_profile

echo "export KAFKA=$KAFKA_HOME/bin" >> ~/.bash_profile
echo "export KAFKA_CONFIG=$KAFKA_HOME/config" >> ~/.bash_profile
source ~/.bash_profile

$KAFKA/zookeeper-server-start.sh $KAFKA_CONFIG/zookeeper.properties
$KAFKA/kafka-server-start.sh $KAFKA_CONFIG/server.properties

How to verify your installation?

(in your kafka path)

Create topic

> $KAFKA/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

Verify it

> $KAFKA/kafka-topics.sh --list --zookeeper localhost:2181


Or you can use a Golang client (such as sarama, mentioned above) to verify it.



[TIL][Golang] Basic usage of cobra

spf13/cobra is a great package if you want to write your own console app. Even kubectl, the Kubernetes console app, is developed with cobra.

Create a simple CLI app example

Let's use kubectl as a simple example of the commands it supports:

  • kubectl get nodes
  • kubectl create -f ...RESOURCE
  • kubectl delete -f ...RESOURCE

Create sub-command using Cobra

Taking those commands as an example, there are some sub-commands as follows:

  • get
  • create
  • delete

Here is how we add those sub-commands to your app (ex: kctl):

  • Run cobra init in your repo. It will create /cmd and main.go.
  • Run cobra add get to add the sub-command get.
    • Now you can try kctl get to get a prompt from the console telling you that you called this sub-command.
  • Repeat for create and delete. You will see the related help in kctl --help.

Add nested-command using Cobra

Cobra's console generator can add sub-commands for you, but you need to add nested commands manually.

ex: we need to add one for kctl get nodes.

  • Add nodes.go in /cmd.
  • Add the following code as an example.

package cmd

import (
	"fmt"

	"github.com/spf13/cobra"
)

// nodesCmd represents the nested command `kctl get nodes`.
var nodesCmd = &cobra.Command{
	Use:   "nodes",
	Short: "Print the nodes info",
	Long:  `Print the detailed info of nodes`,
	Run: func(cmd *cobra.Command, args []string) {
		fmt.Println("get nodes is called")
	},
}

func init() {
	getCmd.AddCommand(nodesCmd)
}

  • The most important part: you need getCmd.AddCommand(nodesCmd), so that nodes hangs under get instead of the root command.

[TIL] Effective way for git rebase


This article covers git rebase, and along the way mentions a few git tricks, including how to completely remove an accidentally committed file from git history.


Manual rebase

Before starting the rebase, fetch the latest code:

git fetch origin

Pull the latest code and start the rebase (ex: rebase onto master):

git rebase origin/master

If the rebase stops on conflicts, modify the code, stage the fixes, and continue:

git add your_changed_code
git rebase --continue

Force push, because the trees differ after a rebase:

git push -f -u origin HEAD
  • -u: short for set-upstream.
  • -f: force (because the rebase rewrote your history).

Interactive rebasing

Rebase interactively:

git rebase -i origin/develop

It will open an editor; select your changes and mark each commit as pick or squash.
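The interactive editor shows a todo list like the one below (the hashes and messages here are made up for illustration); commits marked squash are folded into the commit above them:

```
pick a1b2c3d Add login API
squash b2c3d4e Fix typo in login API
pick c3d4e5f Update README
```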

Check all commits:

git log --stat

Roll back the last commit and check again:

git reset HEAD~
git log --stat --decorate

Here is a detailed example of how to rebase evan/test1 onto develop:

git checkout -t origin/evan/test1
git log --stat --decorate
git fetch origin
git rebase -i origin/develop
vim .gitignore
git add -v .gitignore
git rebase --continue
git status
git submodule update
git log --stat
git push -f origin HEAD

If your rebase target branch (origin/develop) has itself been rebased:

git rebase {from commit} --onto origin/develop 

List previous logs

List the logs before commit 33322:

git log 33322~

Clean up git if you find a mis-committed file

First step: list all blobs in history sorted by size (note: gcut and gnumfmt are the GNU coreutils tools installed via Homebrew):

git rev-list --objects --all | git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' | awk '/^blob/ {print substr($0,6)}' | sort --numeric-sort --key=2 | gcut --complement --characters=13-40 | gnumfmt --field=2 --to=iec-i --suffix=B --padding=7 --round=nearest

Second step (filter-branch)

  1. Remember to clone a mirror before you filter the branch:

git clone --mirror

  2. Then just start to filter your git branch:
set -e
# git filter-branch --index-filter 'git rm --cached --ignore-unmatch filename' HEAD
git filter-branch --index-filter "git rm -r -f --cached --ignore-unmatch $*" --prune-empty --tag-name-filter cat -- --all
git for-each-ref --format="%(refname)" refs/original/ | xargs -n 1 git update-ref -d
git reflog expire --expire=now --all
git reset --hard
git gc --aggressive --prune=now

[Coursera] Deep Learning Specialization: Neural Networks and Deep Learning (Part 3)

I finally completed the first course of deeplearning.ai, “Neural Networks and Deep Learning”.

A really interesting foundational course: finishing it is basically equivalent to finishing the whole O'Reilly deep learning book, and the assignments have you actually implement parts of a DNN classifier with numpy.


I had been meaning to learn deep learning anyway; by chance I read this Coursera review article and took the 7-day trial. Besides providing Jupyter Notebooks, the assignments are quite interesting, so I kept going. My progress is currently at Week 2. I highly recommend it to anyone with a little programming background; the math in it should be fine. Along the way you also get to learn how to use numpy in Python, because the course mainly teaches you how to piece a neural network together with numpy.

Course link: here



Week 4: Deep Neural Networks


  • The layer count L of a deep neural network does not include the input layer, but does include the hidden layers and the output layer.
  • n^[l] denotes the number of units in layer l.
  • X (the input layer) can also be written as a^[0].


Anything used to determine the parameters W and b counts as a hyperparameter, for example:

  • Learning rate
  • Number of hidden layers and hidden units
  • Choice of activation function

About the shapes of the layers and weights in a DNN

From this figure, there is actually quite a lot worth discussing about deep neural networks:

  • This NN has 5 layers in total, 4 of which are hidden layers (the input layer is not counted).
  • The number of neurons in each layer is:
    • A^0 = 2
    • A^1 = 3
    • A^2 = 5
    • A^3 = 4
    • A^4 = 2
    • A^5 = 1
  • The shapes of the weight matrices W are:
    • W1 = (3, 2)
    • W2 = (5, 3)
    • W3 = (4, 5)
    • W4 = (2, 4)
    • W5 = (1, 2)
    • The general rule we can derive is: W^l has shape (n^l, n^(l-1)), i.e. (current layer size, previous layer size).


The derivation goes like this: Z1 = W1 * X + B1, where Z1 is [3, 1] and X is [2, 1] (assume B1 = 0 for now). For W1 * [2, 1] to produce [3, 1], W1 must be [3, 2], since [3, 2] * [2, 1] -> [3, 1]. Putting B1 back, it has to match Z1, so B1.shape = [3, 1].
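Written generally for any layer l (in the course's bracket notation), the forward pass and its shapes are:

```latex
Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}, \qquad
A^{[l]} = g^{[l]}\left(Z^{[l]}\right)
```

with $W^{[l]} \in \mathbb{R}^{n^{[l]} \times n^{[l-1]}}$ and $b^{[l]} \in \mathbb{R}^{n^{[l]} \times 1}$, where $g^{[l]}$ is the activation function of layer $l$.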


Init parameters:
  • Init W and b.
  • W must not be initialized to all zeros, because then every unit in a layer computes the same thing and the network cannot break the symmetry.
    • shape:
      • W1: (hidden layer size, input layer size)
      • W2: (output layer size, hidden layer size)
      • in general: (layer l dim, layer (l-1) dim)
  • For b, zero init values are suggested.
    • shape:
      • B1: (hidden layer size, 1)
      • B2: (output layer size, 1)
Init deep parameters:
  • You need to init (W, B) for every layer as well.
Linear forward:
  • Calculate Z = WX + b.
  • Need to cache the parameters: cache = (A, W, b).
  • Need to return Z, cache as well.
Linear-activation forward:

Use sigmoid or relu to transform your Z into an activation (denoted A).

Use A_prev (the A from the previous layer) to get Z and transform it into A.

L-model forward:
  • Calculate each hidden layer with the relu activation function.
  • Use sigmoid as the output layer activation.
Compute cost
Backward propagation - linear backward
L-model backward
Update parameters
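The initialization and linear-forward steps above can be sketched with numpy as follows (the function names and the 0.01 scaling factor are my own choices, not necessarily the course's exact code):

```python
import numpy as np

def init_params(layer_dims):
    """Initialize W (small random values) and b (zeros) for every layer.

    layer_dims, e.g. [2, 3, 1]: input size 2, one hidden layer of 3, output 1.
    W[l] has shape (n_l, n_{l-1}); b[l] has shape (n_l, 1).
    """
    params = {}
    for l in range(1, len(layer_dims)):
        # W must not be all zeros, or every unit in a layer stays identical.
        params["W" + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * 0.01
        # Zero init is fine for b.
        params["b" + str(l)] = np.zeros((layer_dims[l], 1))
    return params

def linear_forward(A_prev, W, b):
    """Compute Z = W·A_prev + b and cache the inputs for backprop."""
    Z = W @ A_prev + b
    cache = (A_prev, W, b)
    return Z, cache

params = init_params([2, 3, 1])
X = np.random.randn(2, 5)   # 5 training examples with 2 features each
Z1, cache1 = linear_forward(X, params["W1"], params["b1"])
print(Z1.shape)             # (3, 5)
```

Note how the shapes match the derivation above: W1 is (3, 2), so W1·X maps (2, 5) inputs to (3, 5) pre-activations, and b1 of shape (3, 1) broadcasts across the examples.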