Tuning EMQX to Scale to One Million Concurrent Connections on Kubernetes

Reference

- Tuning EMQX to Scale to One Million Concurrent Connections on Kubernetes
- Performance Tuning (Linux)
- 矽谷牛的耕田筆記

Linux Kernel Tuning

Node level, basically the non-namespaced sysctls:

```bash
# Sets the maximum number of file handles allowed by the kernel
sysctl -w fs.file-max=2097152
# Sets the maximum number of open file descriptors that a process can have
sysctl -w fs.nr_open=2097152
```

Namespaced sysctls:

```bash
# Sets the maximum number of connections that can be queued for acceptance by the kernel
sysctl -w net.core.somaxconn=32768
# Sets the maximum number of SYN requests that can be queued by the kernel
sysctl -w net.ipv4.tcp_max_syn_backlog=16384
# Sets the minimum, default, and maximum sizes of the TCP read and write buffers
sysctl -w net.ipv4.tcp_rmem='1024 4096 16777216'
sysctl -w net.ipv4.tcp_wmem='1024 4096 16777216'
# Sets parameters for TCP connection tracking
sysctl -w net.netfilter.nf_conntrack_tcp_timeout_time_wait=30
# Controls the maximum number of entries in the TCP time-wait bucket table
sysctl -w net.ipv4.tcp_max_tw_buckets=1048576
# Controls the timeout for FIN-WAIT-2 sockets
sysctl -w net.ipv4.tcp_fin_timeout=15
```

There are some more sysctls that would improve performance, but because of an active Kubernetes issue we are not able to set them at the container level:

```bash
# Sets the size of the backlog queue for the network device
sysctl -w net.core.netdev_max_backlog=16384
# Amount of memory allocated for storing incoming and outgoing data for a socket
sysctl -w net.core.rmem_default=262144
sysctl -w net.core.wmem_default=262144
# Sets the maximum amount of memory for the socket buffers
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.core.optmem_max=16777216
```

Erlang VM Tuning

```
## Erlang process limit
node.process_limit = 2097152
## Sets the maximum number of simultaneously existing ports for this system
node.max_ports = 2097152
```

EMQX Broker Tuning

```
# Other configuration…
EMQX_LISTENER__TCP__EXTERNAL: "0.0.0.0:1883"
EMQX_LISTENER__TCP__EXTERNAL__ACCEPTORS: 64
EMQX_LISTENER__TCP__EXTERNAL__MAX_CONNECTIONS: 1024000
```
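To sanity-check a tuned broker before ramping toward the one-million target, here is a minimal Go sketch using the Eclipse Paho client (github.com/eclipse/paho.mqtt.golang) that opens a batch of MQTT connections and holds them open. The broker address and client count are illustrative assumptions, not values from the post.

```go
// Hedged sketch: opens `clients` MQTT connections and keeps them alive so
// the effect of the kernel / Erlang VM tuning can be observed on the broker.
package main

import (
	"fmt"
	"log"
	"time"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	const broker = "tcp://emqx.example.com:1883" // hypothetical broker address
	const clients = 1000                         // ramp this up gradually toward the real target

	for i := 0; i < clients; i++ {
		opts := mqtt.NewClientOptions().
			AddBroker(broker).
			SetClientID(fmt.Sprintf("load-%d", i)).
			SetKeepAlive(60 * time.Second)
		c := mqtt.NewClient(opts)
		if token := c.Connect(); token.Wait() && token.Error() != nil {
			log.Fatalf("client %d failed to connect: %v", i, token.Error())
		}
	}
	log.Printf("%d clients connected, holding connections open", clients)
	time.Sleep(time.Hour) // keep the sockets alive while watching the broker dashboard
}
```

Running several instances of this from different machines avoids exhausting the client side's own ephemeral ports and file descriptors.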

Golang tips

Go is a high-performance language, but that does not mean we can stop caring about performance; we still need to pay attention to it. Below are some performance-related programming tips.

- If you need to convert a number to a string, strconv.Itoa() is about twice as fast as fmt.Sprintf().
- Avoid converting a string to []byte whenever possible; the conversion copies the data and hurts performance.
- If you call append() on a slice inside a for-loop, grow the slice's capacity up front. This avoids repeated memory reallocation, and also avoids the runtime growing the slice by powers of two and wasting memory you never use.
- Use strings.Builder or bytes.Buffer to concatenate strings; it performs three to four orders of magnitude better than + or +=.
- Use concurrent goroutines wherever possible, with sync.WaitGroup to synchronize the partitioned work.
- Avoid memory allocation in hot code paths; it keeps the GC busy. Use sync.Pool to reuse objects wherever possible.
- Use lock-free operations: avoid mutexes and use the sync/atomic package wherever possible. (For more on lock-free programming, see "Lock-Free Queue Implementation" or "Lock-Free Hashmap Implementation".)
- Use buffered I/O. I/O is a very, very slow operation; bufio.NewWriter() and bufio.NewReader() bring much better performance.
- For a fixed regular expression used inside a for-loop, always compile it once with regexp.Compile() outside the loop. Performance improves by about two orders of magnitude.
- If you need a higher-performance wire format, consider protobuf or msgp instead of JSON, since JSON serialization and deserialization rely on reflection.
- When using a map, integer keys are faster than string keys, because integer comparison is faster than string comparison.

Reference

- GO 编程模式:切片,接口,时间和性能
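As a footnote to the tips above, here is a small benchmark sketch illustrating the strconv.Itoa and strings.Builder points; the package name and loop sizes are my own choices. Run it with `go test -bench=.`.

```go
// Benchmarks comparing fmt.Sprintf vs strconv.Itoa for int-to-string
// conversion, and += vs strings.Builder for string concatenation.
package perf_test

import (
	"fmt"
	"strconv"
	"strings"
	"testing"
)

func BenchmarkSprintf(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = fmt.Sprintf("%d", i)
	}
}

func BenchmarkItoa(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = strconv.Itoa(i)
	}
}

func BenchmarkConcatPlus(b *testing.B) {
	for i := 0; i < b.N; i++ {
		s := ""
		for j := 0; j < 100; j++ {
			s += "x" // each += copies the whole string so far
		}
		_ = s
	}
}

func BenchmarkConcatBuilder(b *testing.B) {
	for i := 0; i < b.N; i++ {
		var sb strings.Builder
		for j := 0; j < 100; j++ {
			sb.WriteString("x") // appends into a growing buffer, no per-step copy
		}
		_ = sb.String()
	}
}
```

The ns/op output makes the gap measurable on your own hardware rather than taken on faith.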

Cloudflare tunnel on Synology

Setup Synology

1. Create a directory in the docker directory, such as cloudflare-tunnel.
2. Download the cloudflared/cloudflared image to the registry.
3. ssh to admin@synology.
4. Change the cloudflare-tunnel owner: sudo chown -R 65532:65532 /volume1/docker/cloudflare-tunnel.

Run containers

cloudflared tunnel login

1. Run a container and mount the volume docker/cloudflare-tunnel:/home/nonroot/.cloudflared.
2. Select "Use the same network as Docker Host" in the network tab.
3. Add the command tunnel login in the environment tab.
4. Go to the container log and copy the login URL.
5. Paste the URL into a browser and authorize the zone.
6. Export the container settings JSON to the cloudflare-tunnel directory.

cloudflared tunnel create synology-tunnel

1. Edit the container settings JSON in the cloudflare-tunnel directory and change cmd to tunnel create synology-tunnel.
2. Import the container settings JSON and run a new container. The container will stop and create the tunnel credentials JSON in cloudflare-tunnel.
3. Create config.yml and write the ingress rules. In config.yml, the tunnel value is the same as the tunnel credentials JSON file name, and credentials-file is /home/nonroot/.cloudflared/<tunnel credentials JSON>.
4. Export the second container settings JSON to the cloudflare-tunnel directory.

cloudflared tunnel route dns synology-tunnel synology.ruru910.com

1. Edit the second container settings JSON in the cloudflare-tunnel directory and change cmd to tunnel route dns synology-tunnel synology.ruru910.com.
2. Import the second container settings JSON and run a new container. The container will stop and create a DNS record mapping the domain to the tunnel.

cloudflared tunnel run synology-tunnel

1. Edit the second container settings JSON in the cloudflare-tunnel directory and change cmd to tunnel run synology-tunnel.
2. Import the second container settings JSON and run a new container. The tunnel is now connectable, as the sketch after these steps can verify.

Reference

- CLOUDFLARE tunnel on SYNOLOGY. (the raw way)
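To confirm that final step from outside the Synology box, a throwaway Go probe works; the hostname is the one routed above, and everything else in this sketch is my own, not from the original guide.

```go
// Quick reachability check for the tunneled hostname after
// `cloudflared tunnel run synology-tunnel` is up.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get("https://synology.ruru910.com")
	if err != nil {
		fmt.Println("tunnel not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("tunnel answered with status:", resp.Status)
}
```

Any HTTP status at all means DNS, the Cloudflare edge, and the tunnel connector are wired together; an error usually means the run container is not up or the ingress rules in config.yml do not match.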

Nginx notes

Notes on Nginx configuration files, with explanations.

File structure:

```
.
├── geoip.conf
├── nginx.conf
├── sites-available
│   └── default.conf
├── sites-enabled
│   └── default.conf -> ../sites-available/default.conf
└── upstream.conf
```

geoip.conf

```nginx
## module: ngx_http_geoip2_module
## https://github.com/leev/ngx_http_geoip2_module
## Load the GeoIP database and define variables from it
geoip2 /usr/share/GeoIP/GeoLite2-Country.mmdb {
    auto_reload 60m;
    $geoip2_metadata_country_build metadata build_epoch;
    ## Set $geoip2_data_country_code to the ISO 3166 country code of $remote_addr
    $geoip2_data_country_code source=$remote_addr country iso_code;
    ## Set $geoip2_data_country_name to the corresponding English country name
    $geoip2_data_country_name country names en;
}
```

upstream.conf

```nginx
## module: ngx_http_upstream_module
## Define server groups
upstream to_nodejs1 {
    ## server address [parameters]; defines a server
    ## parameters:
    ##   weight=number     server weight, default 1
    ##   max_fails=number  maximum number of failed attempts to this upstream server, default 1
    ##   fail_timeout=time how long to stop sending requests to this upstream server once max_fails is reached, default 10s
    ##   backup            mark this upstream server as a backup; it receives requests only when the other upstream servers are unavailable
    ##   down              mark this upstream server as unavailable
    server 10.7.0.12:9000 max_fails=3 fail_timeout=5s;
    server 10.7.0.12:9001 max_fails=3 fail_timeout=5s backup;
}

upstream to_nodejs2 {
    server 10.7.0.12:9002 max_fails=3 fail_timeout=5s;
    server 10.7.0.12:9003 max_fails=3 fail_timeout=5s backup;
}

upstream to_nodejs9005 {
    server 10.7.0.12:9005 max_fails=3 fail_timeout=5s;
}

## module: ngx_http_map_module
## map string $variable { ... } creates a new variable
map $arg_agent $game_api {
    ## $arg_agent is the value of the agent query parameter (https://abc.com/?agent=123)
    ## agent=123: $game_api is to_nodejs95
    123 to_nodejs95;
    ## agent ends in 1, 2, 3, or 4: $game_api is to_nodejs1
    ~*1$ to_nodejs1;
    ~*2$ to_nodejs1;
    ~*3$ to_nodejs1;
    ~*4$ to_nodejs1;
    ## if agent matches none of the rules above, $game_api defaults to to_nodejs2
    default to_nodejs2;
}
```

default.conf

```nginx
## module: ngx_http_limit_req_module
## Request rate limiting
## limit_req_zone key zone=name:size rate=rate [sync]; defines a rate-limit zone
limit_req_zone $binary_remote_addr$server_name zone=websocket:10m rate=1r/m;
## limit_req_status code; HTTP status code returned for rejected requests, default 503
limit_req_status 502;

## Virtual host
server {
    ## listen port [default_server] [ssl] [http2 | spdy] [proxy_protocol] [setfib=number] [fastopen=number] [backlog=number] [rcvbuf=size] [sndbuf=size] [accept_filter=filter] [deferred] [bind] [ipv6only=on|off] [reuseport] [so_keepalive=on|off|[keepidle]:[keepintvl]:[keepcnt]];
    ## Sets the listening port, default *:80
    ## The setting below listens on port 80 as the default virtual host
    listen 80 default_server;
    ## server_name name ...; sets the virtual host name, regular expressions allowed, default ""
    server_name _;

    access_log logs/default/default.log json;
    error_log logs/default/default.error.log warn;

    ## module: ngx_http_access_module
    ## allow address | CIDR | unix: | all; allow access from these addresses
    allow 1.1.1.1;
    ## deny address | CIDR | unix: | all; deny access from these addresses
    deny 12.34.56.78;

    ## Root directory for requests
    root /usr/share/nginx/html;

    ## limit_req zone=name [burst=number] [nodelay | delay=number]; applies a rate-limit zone
    limit_req zone=websocket nodelay;
    ## limit_req_log_level info | notice | warn | error; log level for rejected requests, default error
    limit_req_log_level warn;

    ## location [ = | ~ | ~* | ^~ ] uri { ... }
    ## location @name { ... }
    ## Routes requests by URI
    location / {
        default_type application/json;
        ## Return HTTP status 200 with a JSON body
        return 200 '{"Code": "$status", "IP": "$remote_addr"}';
    }
}

server {
    ## The setting below listens on port 443 as the default virtual host; all connections use SSL
    listen 443 default_server ssl;
    server_name _;
    access_log logs/default/default.log json;
    error_log logs/default/default.error.log warn;

    ## module: ngx_http_ssl_module
    ## Certificate in PEM format
    ssl_certificate /etc/ssl/hddv1.com.crt;
    ## Private key in PEM format
    ssl_certificate_key /etc/ssl/hddv1.com.key;
    ## Enabled SSL/TLS versions, default TLSv1 TLSv1.1 TLSv1.2
    ssl_protocols TLSv1.2 TLSv1.3;
    ## Enabled ciphers, default HIGH:!aNULL:!MD5
    ssl_ciphers "EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC2:!RC4:!aNULL:!eNULL:!LOW:!IDEA:!DES:!TDES:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!EXPORT:!ANON";
    ## File with DH parameters for DHE ciphers
    ssl_dhparam /etc/ssl/dhparams.pem;
    ## Prefer the server's ciphers over the client's, default off
    ssl_prefer_server_ciphers on;
    ## ssl_session_cache off | none | [builtin[:size]] [shared:name:size];
    ## Session cache type and size, default none
    ssl_session_cache shared:SSL:1m;
    ## How long a session may be reused, default 5 minutes
    ssl_session_timeout 5m;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";

    root /usr/share/nginx/html;
    limit_req zone=websocket nodelay;
    limit_req_log_level warn;
    default_type application/json;

    location / {
        default_type application/json;
        return 200 '{"Code": "$status", "IP": "$remote_addr"}';
    }
}
```

nginx.conf

```nginx
## module: ngx_core_module
## worker_processes number | auto; number of worker processes; auto means one per CPU
worker_processes auto;
## worker_rlimit_nofile number; maximum number of open files per worker, default is the system RLIMIT_NOFILE
worker_rlimit_nofile 131072;
## worker_shutdown_timeout time; on reload and similar commands, Nginx force-closes all affected workers after this time
worker_shutdown_timeout 60;
## error_log file [level]; where to write the error log
## levels: debug, info, notice, warn, error, crit, alert, emerg
error_log logs/error.log warn;
## pid file; location of the master process PID file
pid logs/nginx.pid;

## module: ngx_core_module
## Connection processing
events {
    ## worker_connections number; maximum concurrent connections per worker, default 512; must be less than worker_rlimit_nofile
    ## maximum connections = worker_connections * worker_processes
    worker_connections 102400;
    ## accept_mutex on | off; default off
    ## on: when a new connection arrives, only one worker accepts it while the rest keep sleeping
    ## off: all workers are woken up, one accepts the connection, and the rest go back to sleep
    ## for long-lived TCP connections with heavy traffic, off gives better performance and QPS
    accept_mutex off;
    ## multi_accept on | off; whether a worker accepts all new connections at once, default off
    multi_accept on;
}

## module: ngx_http_core_module
## HTTP server settings
http {
    ## module: ngx_core_module
    ## include file | mask; include settings from a file
    ## the line below loads the MIME types defined in mime.types
    include mime.types;
    ## default_type mime-type; default MIME type, default text/plain
    default_type application/octet-stream;
    ## server_names_hash_max_size size; maximum size of the server_name hash tables, default 512
    server_names_hash_max_size 2048;
    ## bucket size of the server_name hash tables, used for fast server_name lookup; the default depends on the CPU L1 cache
    server_names_hash_bucket_size 256;
    ## server_tokens on | off | build | string; whether to show the Nginx version on error pages, default on
    server_tokens off;
    ## whether to log 404s to the error log
    log_not_found off;
    ## use sendfile() for more efficient file transfers, default off
    sendfile on;
    ## send files in full packets, default off
    tcp_nopush on;
    ## send data as soon as possible, default on
    tcp_nodelay on;
    ## keep-alive timeout in seconds; Nginx closes idle connections after this, default 75
    keepalive_timeout 70;
    ## client_max_body_size size; maximum allowed request body size
    client_max_body_size 64M;

    ## module: ngx_http_gzip_module
    ## enable gzip compression, default off
    gzip on;
    ## minimum Content-Length to compress, default 20
    gzip_min_length 1k;
    ## gzip_buffers number size; compression buffer count and size, default one memory page per buffer
    gzip_buffers 4 32k;
    ## compression level, 1 to 9, default 1
    gzip_comp_level 7;
    ## MIME types to compress, default text/html
    gzip_types text/plain application/x-javascript text/css application/xml text/javascript application/x-httpd-php application/json;
    ## add Vary: Accept-Encoding to the response headers, default off
    gzip_vary on;
    ## disable compression for specific User-Agents
    ## the line below disables it for IE 6
    gzip_disable "MSIE [1-6]\.";

    ## resolver address ... [valid=time] [ipv6=on|off] [status_zone=zone]; name servers used to resolve server_name, upstream servers, etc.
    resolver 114.114.114.114 8.8.8.8 1.1.1.1;

    ## module: ngx_http_headers_module
    ## add_header name value [always]; add a field to the response headers
    ## the lines below allow cross-origin requests
    add_header Access-Control-Allow-Origin *;
    add_header Access-Control-Allow-Headers DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type;
    add_header Access-Control-Allow-Methods GET,POST,OPTIONS;
    add_header Access-Control-Expose-Headers 'WWW-Authenticate,Server-Authorization,User-Identity-Token';

    ## module: ngx_http_realip_module
    ## set_real_ip_from address | CIDR | unix:; trusted servers whose addresses may be replaced, e.g. reverse proxies
    set_real_ip_from 10.0.0.0/8;
    set_real_ip_from 172.16.0.0/12;
    set_real_ip_from 192.168.0.0/16;
    ## real_ip_header field | X-Real-IP | X-Forwarded-For | proxy_protocol; which header supplies the real client IP, default X-Real-IP
    real_ip_header X-Forwarded-For;
    ## treat the "last non-trusted IP" in real_ip_header (rather than the last IP) as the real client IP, default off
    real_ip_recursive on;

    ## module: ngx_http_log_module
    ## log_format name [escape=default|json|none] string ...; defines a log format
    log_format json escape=json '{"@timestamp":"$time_iso8601",'
        '"@source":"$server_addr",'
        '"ip":"$http_x_forwarded_for",'
        '"client":"$remote_addr",'
        '"request_method":"$request_method",'
        '"scheme":"$scheme",'
        '"domain":"$server_name",'
        '"client_host":"$host",'
        '"referer":"$http_referer",'
        '"request":"$request_uri",'
        '"args":"$args",'
        '"sent_bytes":$body_bytes_sent,'
        '"status":$status,'
        '"responsetime":$request_time,'
        '"upstreamtime":"$upstream_response_time",'
        '"upstreamaddr":"$upstream_addr",'
        '"http_user_agent":"$http_user_agent",'
        '"Country":"$geoip2_data_country_name",'
        '"State":"$geoip2_data_state_name",'
        '"City":"$geoip2_data_city_name",'
        '"https":"$https"'
        '}';
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
        '$status $body_bytes_sent "$http_referer" '
        '"$http_user_agent" "$http_x_forwarded_for"';
    ## access_log path [format [buffer=size] [gzip[=level]] [flush=time] [if=condition]]; log destination and format name
    ## access_log off; disables logging
    access_log logs/access.log json;
}
```
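The websocket zone above allows one request per minute per client/server pair with no burst, so back-to-back requests should return the configured limit_req_status of 502 after the first 200. A throwaway Go probe makes that visible; the target address below is a placeholder for wherever this config is deployed.

```go
// Fires a burst of requests at the rate-limited server: expect one
// "200 OK" followed by "502" responses from limit_req.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	for i := 0; i < 5; i++ {
		resp, err := http.Get("http://192.0.2.10/") // placeholder server address
		if err != nil {
			fmt.Println("request failed:", err)
			continue
		}
		resp.Body.Close()
		fmt.Printf("request %d -> %s\n", i+1, resp.Status)
	}
}
```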

Gitlab-CI Introduction

Gitlab CI Concept

- Gitlab, DevOps, GitOps

Workflow

code push -> pipeline -> stage -> job

Design

plan -> code -> build -> test -> release -> deploy -> operate -> monitor -> plan

Runner Executors

- Shell
- VirtualBox
- Docker
- Docker Machine
- Kubernetes
- Else…

References

- Gitlab CI/CD
- Gitlab Runner
- .gitlab-ci.yml

Runner Register

```bash
gitlab-runner register
```

After registering, the runner's config.toml looks like:

```toml
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "public-shell"
  url = "https://gitlab.go2cloudten.com/"
  token = "-mdH9OAOzG5yPsf_AVnW"
  executor = "shell"

[[runners]]
  name = "public-docker"
  url = "https://gitlab.go2cloudten.com/"
  token = "AcEGPPKTS1uuQ_A_qpWy"
  executor = "docker"
  [runners.docker]
    dns = ["192.168.185.5", "192.168.185.6"]
    tls_verify = false
    image = "registry.go2cloudten.com/it/office_sop/node:12.13.0"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    shm_size = 0
    pull_policy = "if-not-present"
    volumes = ["/cache"]
```

Repository .gitlab-ci.yml

```yaml
stages:
  - domain

check-icp:
  stage: domain
  image: registry.go2cloudten.com/it/office_sop/icp
  tags:
    - docker
  script:
    - domains=$(awk -F '|' '{if($6 ~ "Y" && ($7 ~ "West" || $7 ~ "Yuqu")) print $3}' domains-info.md | sed 's/ //g' | sort | uniq)
    - if [[ "${domains}" == "" ]]; then telegram.sh 'There is no domain in list' ; else telegram.sh 'Start checking ICP.' ; fi
    - for i in ${domains}; do result=$(checkicp ${i}); if [[ "${result}" == "未备案" ]]; then telegram.sh "${i} 未备案"; sleep 1; fi; done
    - telegram.sh 'ICP check completed.'
  only:
    - schedules
```
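Since the check-icp job only runs on schedules, it can be handy to watch its pipelines from outside GitLab. A hedged Go sketch against the GitLab REST API (GET /api/v4/projects/:id/pipelines); the project ID and token are placeholders, not values from these notes.

```go
// Lists recent pipelines for one project via the GitLab REST API.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

type pipeline struct {
	ID     int    `json:"id"`
	Status string `json:"status"`
	Ref    string `json:"ref"`
}

func main() {
	// 42 is a placeholder project ID.
	req, err := http.NewRequest("GET",
		"https://gitlab.go2cloudten.com/api/v4/projects/42/pipelines", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("PRIVATE-TOKEN", "REPLACE_ME") // personal access token

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var pipelines []pipeline
	if err := json.NewDecoder(resp.Body).Decode(&pipelines); err != nil {
		log.Fatal(err)
	}
	for _, p := range pipelines {
		fmt.Printf("#%d %s (%s)\n", p.ID, p.Status, p.Ref)
	}
}
```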

Docker Introduction

Docker Concept

VM vs Container

- VM: based on an OS
- Container: based on an application (Linux kernel: namespaces and cgroups)

Client to Server

- Docker daemon: containerd, docker-containerd-shim, docker-runc
- Docker client: CLI commands
- docker cli -> docker daemon -> containerd -> runc -> namespaces & cgroups

Image

- Snapshots

Container

- Processes running on top of a read-only image

Hub / Registry

- Stores images

References

- Docker —— 從入門到實踐
- docker docs

Docker commands

Dockerfile (a stand-in cmd/main.go for this build is sketched at the end of these notes):

```dockerfile
ARG dist="/tmp/password"
ARG projectDir="/password"

FROM golang:1.16-alpine3.14 AS builder
RUN apk add build-base upx
ARG dist
ARG projectDir
WORKDIR ${projectDir}
COPY . .
RUN go build -trimpath -o main cmd/main.go
RUN upx -9 -o ${dist} main

FROM scratch
ARG dist
ENV TZ=Asia/Taipei
COPY --from=builder ${dist} /usr/local/bin/password
```

Dockerfile1

```dockerfile
FROM alpine
CMD ["nc","-l","12345"]
```

Dockerfile2

```dockerfile
FROM alpine
CMD ["echo","DOCKER"]
```

docker build

```bash
docker build . -t program
docker build . -f Dockerfile -t test_mysql
docker build . -t hello:v1.1 --build-arg dist=/tmp/hello --build-arg projectDir=/hello
docker build .
```

```bash
. docker/status
echo -e "${GREEN}Before build${RESET}"
docker image ls
docker build . -f docker/Dockerfile1 -t test1
docker build . -f docker/Dockerfile2 -t test2
echo -e "${GREEN}After build${RESET}"
docker image ls
```

docker run AND rm

```bash
. docker/status
echo -e "${GREEN}Run container1${RESET}"
docker run -d --name container1 test1
echo -e "${GREEN}Run container2${RESET}"
docker run -d --name container2 test2
echo -e "${GREEN}List alive containers${RESET}"
docker ps
echo -e "${GREEN}List all containers${RESET}"
docker ps -a
echo -e "${GREEN}Remove alive container${RESET}"
docker rm -f container1
echo -e "${GREEN}List all containers${RESET}"
docker ps -a
echo -e "${GREEN}Remove exited container${RESET}"
docker rm container2
echo -e "${GREEN}List all containers${RESET}"
docker ps -a
```

docker pull AND rmi

```bash
. docker/status
echo -e "${GREEN}List all images${RESET}"
docker image ls
echo -e "${GREEN}Pull alpine image${RESET}"
docker pull alpine
echo -e "${GREEN}List all images${RESET}"
docker image ls
```

docker rmi

```bash
. docker/status
echo -e "${GREEN}Remove alpine image${RESET}"
docker rmi alpine
echo -e "${GREEN}List all images${RESET}"
docker image ls
```

prune

```bash
docker system prune -f --volumes
```

docker history

```bash
. docker/status
echo -e "${GREEN}History of test1${RESET}"
docker history test1
echo -e "${GREEN}History of mysql:8${RESET}"
docker history mysql:8
```

Docker remote

Edit the service file:

```bash
# /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H tcp://0.0.0.0:2375
```

Restart the service:

```bash
systemctl daemon-reload
systemctl restart docker
```

Specify DOCKER_HOST:

```bash
. docker/status
echo -e "${GREEN}List images on 192.168.185.9${RESET}"
DOCKER_HOST=192.168.185.9:2375 docker images
```

Docker-compose

docker-compose.yml:

```yaml
version: "3"
services:
  svn:
    image: zeyanlin/svn
    environment:
      - LDAP_HOSTS=${LDAP_HOSTS}
      - LDAP_BASE_DN=${LDAP_BASE_DN}
      - LDAP_BIND_DN=${LDAP_BIND_DN}
      - LDAP_ADMIN_PASS=${LDAP_ADMIN_PASS}
    ports:
      - 8000:80
      - 3690:3690
    depends_on:
      - ldap
  ldap:
    image: zeyanlin/openldap
    environment:
      - LDAP_DOMAIN=${LDAP_DOMAIN}
      - LDAP_ADMIN_PASS=${LDAP_ADMIN_PASS}
    ports:
      - 389:389
      - 636:636
  php:
    image: zeyanlin/phpldapadmin
    environment:
      - LDAP_HOSTS=${LDAP_HOSTS}
    ports:
      - 80:80
    depends_on:
      - ldap
```

Env (.env):

```
LDAP_HOSTS=ldap
LDAP_DOMAIN="knowhow.fun"
LDAP_BASE_DN="dc=knowhow,dc=fun"
LDAP_BIND_DN="cn=admin"
LDAP_ADMIN_PASS="123qwe"
```
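The multi-stage Dockerfile above builds cmd/main.go, but the real "password" program is not shown in these notes. A minimal stand-in sketch below lets the build run end to end; it simply prints a random password and is entirely hypothetical.

```go
// Stand-in cmd/main.go for the multi-stage build. Note that a scratch
// image has no libc or shell, so the binary must be fully static
// (build with CGO_ENABLED=0 if linking errors appear at run time).
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

const charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

func main() {
	out := make([]byte, 16)
	for i := range out {
		// crypto/rand gives unbiased, cryptographically secure indices
		n, err := rand.Int(rand.Reader, big.NewInt(int64(len(charset))))
		if err != nil {
			panic(err)
		}
		out[i] = charset[n.Int64()]
	}
	fmt.Println(string(out))
}
```

Building from scratch plus upx keeps the final image to a few hundred kilobytes, which is the point of the two-stage layout.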