[TIL][GIT] How to correctly separate subdirectories from a git repo

Problem

If you have a big repo, you may want to split it into several smaller repos.

There are two approaches you might find by googling; here is how they differ:

Subtree

git subtree split -P feature_a -b "branchA"

It will separate your code, but the resulting branch still carries the whole git history. This will slow down your CI/CD flow if your original repo's history is very large.
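A minimal end-to-end sketch of the subtree approach (the directory feature_a, branch branchA, and repo paths are placeholders):

```shell
# Split the feature_a/ subdirectory into its own branch,
# then pull that branch into a fresh repository.
cd original_repo
git subtree split -P feature_a -b branchA    # branch containing feature_a/ content
mkdir ../feature_a_repo
cd ../feature_a_repo
git init
git pull ../original_repo branchA            # new repo root = old feature_a/
```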

Filter-Branch

Filter-branch rewrites the repo history, picking up only those commits that actually affect the content of a specific subdirectory (see reference 2).

git filter-branch --prune-empty --subdirectory-filter SUBDIR -- --all
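A sketch of the filter-branch workflow, assuming you extract SUBDIR into a brand-new repository (the clone path and remote URL are hypothetical):

```shell
# Work on a fresh clone: filter-branch rewrites history destructively.
git clone original_repo extracted
cd extracted
export FILTER_BRANCH_SQUELCH_WARNING=1       # newer git warns before filter-branch
git filter-branch --prune-empty --subdirectory-filter SUBDIR -- --all
# SUBDIR's files now sit at the repo root with a much smaller history.
git remote set-url origin git@example.com:me/subdir-repo.git   # hypothetical URL
git push origin --all
```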

Reference

[Slides] Consistent Hashing: Algorithmic Tradeoffs

These are the slides (first version) I put together after reading Damian Gryski's blog post "Consistent Hashing: Algorithmic Tradeoffs" over the past few days.

I should find some time to write a Chinese blog post about it.

Consistent hashing algorithmic tradeoffs from Evan Lin

[GTG30] Introduce vgo

Here are the slides from this GTG (Golang Taipei Gathering 30) meetup. The talk mainly introduces vgo, a new feature that may be included in Go 1.11.

GTG30: Introduction vgo from Evan Lin

[Kubernetes] GPU resource names in Kubernetes between Accelerators and DevicePlugin

Two ways to enable GPU in Kubernetes:

If you want to enable GPU resources in Kubernetes and have the Kubelet allocate them, you need to configure it in one of the following ways:

  • Kubernetes 1.7: use the NVIDIA container runtime and enable the Kubelet config feature-gates=Accelerators=true
  • Kubernetes 1.9: use a Device Plugin with the Kubelet config feature-gates=DevicePlugins=true

Check whether a node has GPU resources:

Use the kubectl command kubectl get node YOUR_NODE_NAME -o json to export all node info in JSON format. You should see something like:

## If you use the Accelerators feature (Kubernetes 1.7+)
  "allocatable": {
        "cpu": "32",
        "memory": "263933300Ki",
        "alpha.kubernetes.io/nvidia-gpu": "4",
        "pods": "110"
    },

The details are defined in k8s.io/api/core/v1/types.go.

## If you use the Device Plugin (Kubernetes 1.9+)
  "allocatable": {
        "cpu": "32",
        "memory": "263933300Ki",
        "nvidia.com/gpu": "4",
        "pods": "110"
    },
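Once a node advertises the resource under one of these names, a pod can request it in its resource limits. A minimal sketch for the Device Plugin case (pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod            # hypothetical name
spec:
  containers:
  - name: cuda-container
    image: nvidia/cuda     # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1  # same resource name as in the allocatable output above
```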

[Kubernetes] Use activeDeadlineSeconds to automatically terminate (force stop) your jobs

Preface:

When using Kubernetes, you can run one-off work as a Job. But if you want your work to finish within a specific time so that it releases its resources, this is the way to do it. While studying this recently, I found a few usage tips and noted them down here.

Suppose you want to force-terminate your Kubernetes jobs when they exceed a specific time (e.g. run a job no longer than 2 minutes). In this case you could use a watcher to monitor the Kubernetes job and terminate it when it exceeds that limit. Or, following the K8S doc "Job Termination and Cleanup", you can use activeDeadlineSeconds to force-terminate your jobs.

How to use activeDeadlineSeconds:

It is very easy to set up activeDeadlineSeconds in the job spec.

apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  backoffLimit: 5 
  activeDeadlineSeconds: 100
  template:
    spec:
      containers:
      - name: myjob
        image: busybox
        command: ["sleep", "300"]
      restartPolicy: Never

In this example, this job will be terminated after 100 seconds (if it works well :p )

Before you use activeDeadlineSeconds

  • If you have ever run a job with activeDeadlineSeconds, you will need to delete the job before you run the same job again.
  • The job will not stop if you run it again under the same job name with activeDeadlineSeconds.
  • You will need to change the job name to make activeDeadlineSeconds work again. (Suggestion: add a unique tag as a postfix of the job name.)
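One way to work around the naming caveats above is to generate a unique job name for every run, for example with a timestamp postfix (a sketch; myjob and job.yaml are placeholders):

```shell
# Give each run a unique Job name so activeDeadlineSeconds always
# applies to a fresh Job object.
JOB_NAME="myjob-$(date +%Y%m%d%H%M%S)"
echo "${JOB_NAME}"
# Substitute the name into the manifest and submit it, e.g.:
#   sed "s/name: myjob/name: ${JOB_NAME}/" job.yaml | kubectl apply -f -
```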


[LineBot] Announcement: "Line Taipei Stray Animals Need You" is renamed to "Line Stray Animals Need You" and expands its service to all of Taiwan

The stray-animal data in the "Taipei Open Data" portal suddenly became unreachable, which broke my earlier Line Bot "Taipei Stray Animals Need You". So I went looking for other government data and found the "Animal Adoption" open dataset. (https://data.gov.tw/dataset/9842#r0)

It is fixed now, and I hope everyone keeps using it. If you need some company over the Lunar New Year, give it a try, and after the holidays you can adopt a cute furry family member.

Brief usage:

  • Type any text to get a random animal
  • Type "狗" (dog) or "貓" (cat) to get adoption data for that kind of animal