My VPS Service Deployment Notes

My VPS runs CentOS, so everything below is based on a CentOS system.

Server Setup

Configure the yum repos (pointing at the vault mirrors, since CentOS 7 has moved to the archive):

/etc/yum.repos.d/centos.repo

[base]
name=CentOS-$releasever - Base
baseurl=https://vault.centos.org/7.9.2009/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=1

[updates]
name=CentOS-$releasever - Updates
baseurl=https://vault.centos.org/7.9.2009/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=1

[extras]
name=CentOS-$releasever - Extras
baseurl=https://vault.centos.org/7.9.2009/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=1

Update packages:

yum update

Install common tools:

yum install wget curl git vim jq ntp -y

Set up time synchronization (sync once with ntpdate first, since ntpdate cannot run while ntpd is already bound to the NTP port):

ntpdate pool.ntp.org
systemctl start ntpd
systemctl enable ntpd
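
To confirm the clock is actually syncing, you can check ntpd's peer list (ntpq comes with the ntp package installed above):

# Show ntpd peers; a '*' in front of an entry means it is the selected source
ntpq -p

# systemd's view of the system clock and NTP status
timedatectl status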

[Optional] Set up a system swap file

The VPS has very little RAM, so set up swap first.

# 1GB RAM with 2GB Swap
sudo fallocate -l 2G /swapfile && \
sudo dd if=/dev/zero of=/swapfile bs=1024 count=2097152 && \
sudo chmod 600 /swapfile && \
sudo mkswap /swapfile && \
sudo swapon /swapfile && \
echo "/swapfile swap swap defaults 0 0" | sudo tee -a /etc/fstab && \
sudo swapon --show && \
sudo free -h
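
Optionally, you can also tune vm.swappiness so the kernel prefers RAM over swap; a small sketch (the value 10 is just an example):

# Use swap less aggressively for the current boot
sudo sysctl vm.swappiness=10

# Persist the setting across reboots
echo "vm.swappiness=10" | sudo tee -a /etc/sysctl.conf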

Install Nginx

Referring to "Installing and configuring Nginx with yum on CentOS 7", install via yum:

rpm -ivh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm

yum install nginx -y

systemctl enable nginx

Open firewall ports

yum install firewalld -y

firewall-cmd --zone=public --permanent --add-service=http
firewall-cmd --zone=public --permanent --add-service=https
firewall-cmd --reload

Reverse proxying requires allowing nginx to make outbound network connections under SELinux:

setsebool -P httpd_can_network_connect on
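
To verify the SELinux boolean took effect:

getsebool httpd_can_network_connect
# expected output: httpd_can_network_connect --> on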

Install acme.sh and issue certificates

The domain is hosted on Cloudflare; see the referenced article for details.

curl https://get.acme.sh | sh -s email=[email protected]

export CF_Key="XXXXXXXXXXXXXXXXXX"
export CF_Email="[email protected]"

.acme.sh/acme.sh --issue --server letsencrypt --dns dns_cf -d chensoul.cc -d '*.chensoul.cc'

mkdir -p /etc/nginx/ssl
cp .acme.sh/chensoul.cc_ecc/{chensoul.cc.cer,chensoul.cc.key,fullchain.cer,ca.cer} /etc/nginx/ssl/

.acme.sh/acme.sh --installcert -d chensoul.cc -d '*.chensoul.cc' \
  --cert-file /etc/nginx/ssl/chensoul.cc.cer \
  --key-file /etc/nginx/ssl/chensoul.cc.key \
  --fullchain-file /etc/nginx/ssl/fullchain.cer \
  --ca-file /etc/nginx/ssl/ca.cer \
  --reloadcmd "sudo nginx -s reload"
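
acme.sh also installs a daily renewal cron entry (see the crontab at the end of this post). To check which certificates it manages and what nginx is actually serving:

# Certificates managed by acme.sh
~/.acme.sh/acme.sh --list

# Subject and validity dates of the installed certificate
openssl x509 -in /etc/nginx/ssl/fullchain.cer -noout -subject -dates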

Docker installation and configuration

Install Docker

curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

Enable and start Docker:

systemctl enable docker
systemctl start docker
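
A quick sanity check that the daemon works:

# Pulls a tiny test image and runs it once
docker run --rm hello-world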

Allow forwarded traffic in iptables:

iptables -P FORWARD ACCEPT
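
Note that this policy does not survive a reboot by itself. One simple way to persist it (assuming you are not managing iptables rules some other way) is rc.local:

# Re-apply the FORWARD policy at boot
echo 'iptables -P FORWARD ACCEPT' >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local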

Install Compose

curl -L "https://github.com/docker/compose/releases/download/v2.23.3/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

Make it executable:

chmod +x /usr/local/bin/docker-compose
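
Verify it works:

docker-compose --version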

Following "Best Practice: Use a Docker network", create a custom network:

docker network create custom

List Docker networks:

docker network ls
NETWORK ID     NAME            DRIVER    SCOPE
68f4aeaa57bd   bridge          bridge    local
6a96b9d8617e   custom          bridge    local
4a8679e35f4d   host            host      local
ba21bef23b04   none            null      local

Note: bridge, host, and none are networks that Docker creates by default.

Then, in each service's docker-compose.yml, attach the service to the network and declare it as external at the top level:

networks:
  - custom

networks:
  custom:
    external: true
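
Later, to see which containers have joined the network:

# Shows connected containers and their IP addresses
docker network inspect custom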

Service Deployment

Postgres

1. Install with docker-compose

postgres.yaml

services:
  postgres:
    image: postgres:17
    container_name: postgres
    environment:
      POSTGRES_DB: postgres 
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_HOST_AUTH_METHOD: trust
      PGDATA: /data/postgres
    volumes:
      - /data/postgres:/data/postgres
    ports:
      - "5432:5432"
    healthcheck:
      test: [ 'CMD-SHELL', 'pg_isready -U postgres' ]
      interval: 5s
      timeout: 5s
      retries: 10

    networks:
      - custom

networks:
  custom:
    external: true

2. Start

docker-compose -f postgres.yaml up -d

3. Enter the container:

docker exec -it postgres bash
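
Or connect to psql directly, for example to list the databases:

docker exec -it postgres psql -U postgres -c '\l'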

Rsshub

Run it directly with Docker:

docker run -d --name rsshub -p 1200:1200 diygod/rsshub

Configure nginx:

server {
    listen 80;
    listen [::]:80;
    server_name rsshub.chensoul.cc;

    return 301 https://$host$request_uri;
}

server {
    listen          443 ssl;
    server_name     rsshub.chensoul.cc;

    ssl_certificate      /etc/nginx/ssl/fullchain.cer;
    ssl_certificate_key  /etc/nginx/ssl/chensoul.cc.key;

    ssl_session_cache    shared:SSL:1m;
    ssl_session_timeout  5m;

    ssl_ciphers  HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers  on;

    location / {
        proxy_pass http://127.0.0.1:1200;
    }
}
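
After adding a vhost like this (and each of the configs below), validate and reload nginx:

nginx -t && systemctl reload nginx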

Uptime Kuma

Following the Uptime Kuma docs, deploy with docker-compose; create uptime.yaml:

services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    volumes:
      - ~/.uptime-kuma:/app/data
    ports:
      - 3001:3001 
    restart: always

Start:

docker-compose -f uptime.yaml up -d

Create the nginx config /etc/nginx/conf.d/uptime.conf:

server {
    listen 80;
    listen [::]:80;
    server_name uptime.chensoul.cc;

    return 301 https://$host$request_uri;
}

server {
    listen          443 ssl;
    server_name     uptime.chensoul.cc;

    ssl_certificate      /etc/nginx/ssl/fullchain.cer;
    ssl_certificate_key  /etc/nginx/ssl/chensoul.cc.key;

    ssl_session_cache    shared:SSL:1m;
    ssl_session_timeout  5m;

    ssl_ciphers  HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers  on;

    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   Host $host;

        proxy_http_version 1.1;
        proxy_set_header   Upgrade $http_upgrade;
        proxy_set_header   Connection "upgrade";
    }
}

Upgrade:

docker-compose -f uptime.yaml down
docker pull louislam/uptime-kuma:1
docker-compose -f uptime.yaml up -d

Umami

1. Create the umami database and user in the postgres container:

docker exec -it postgres psql -d postgres -U postgres

CREATE USER umami WITH PASSWORD 'umami@vps!';
CREATE DATABASE umami owner=umami;
ALTER DATABASE umami OWNER TO umami;
GRANT ALL privileges ON DATABASE umami TO umami;

2. Install with docker-compose; create umami.yaml:

services:
  umami:
    image: ghcr.io/umami-software/umami:postgresql-latest
    container_name: umami
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgresql://umami:umami%40vps!@postgres:5432/umami
      DATABASE_TYPE: postgres
      HASH_SALT: vps@2023
      TRACKER_SCRIPT_NAME: random-string.js
    networks:
      - custom
    restart: always

networks:
  custom:
    external: true

As noted at https://eallion.com/umami/, Umami's default tracking script is blocked by most ad blockers, which means you lose visitor data. To work around this, add the TRACKER_SCRIPT_NAME environment variable to the docker-compose file, e.g.:

    environment:
      TRACKER_SCRIPT_NAME: random-string.js

The src of the tracking snippet then changes accordingly:

script.js => random-string.js

Start:

docker-compose -f umami.yaml up -d

3. Set a custom domain

umami.chensoul.cc

4. Create the nginx config /etc/nginx/conf.d/umami.conf:

server {
    listen 80;
    listen [::]:80;
    server_name umami.chensoul.cc;

    return 301 https://$host$request_uri;
}

server {
    listen          443 ssl;
    server_name     umami.chensoul.cc;

    ssl_certificate      /etc/nginx/ssl/fullchain.cer;
    ssl_certificate_key  /etc/nginx/ssl/chensoul.cc.key;

    ssl_session_cache    shared:SSL:1m;
    ssl_session_timeout  5m;

    ssl_ciphers  HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers  on;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header REMOTE-HOST $remote_addr;
        add_header X-Cache $upstream_cache_status;
        # caching
        add_header Cache-Control no-cache;
        expires 12h;
    }
}

5. Add websites

Visit https://umami.chensoul.cc/. The default username and password are admin/umami. After logging in, change the password and add your websites.

6. Upgrade

docker-compose -f umami.yaml down
docker pull ghcr.io/umami-software/umami:postgresql-latest
docker-compose -f umami.yaml up -d

memos

1. Create the memos database and user in the postgres container:

docker exec -it postgres psql -d postgres -U postgres

CREATE USER memos WITH PASSWORD 'memos@vps!';
CREATE DATABASE memos owner=memos;
ALTER DATABASE memos OWNER TO memos;
GRANT ALL privileges ON DATABASE memos TO memos;

2. Install with docker-compose; create memos.yaml:

services:
  memos:
    image: neosmemo/memos:0.21.0
    container_name: memos
    environment:
      - MEMOS_DRIVER=postgres
      - MEMOS_DSN=postgresql://memos:memos%40vps!@postgres:5432/memos?sslmode=disable
    volumes:
      - ~/.memos/:/var/opt/memos
    ports:
      - 5230:5230
    networks:
      - custom

networks:
  custom:
    external: true

Note: the sharing feature was removed after version 0.21.0.

3. Start

docker-compose -f memos.yaml up -d

4. Create the nginx config /etc/nginx/conf.d/memos.conf:

server {
    listen 80;
    listen [::]:80;
    server_name memos.chensoul.cc;

    return 301 https://$host$request_uri;
}

server {
    listen          443 ssl;
    server_name     memos.chensoul.cc;

    ssl_certificate      /etc/nginx/ssl/fullchain.cer;
    ssl_certificate_key  /etc/nginx/ssl/chensoul.cc.key;

    ssl_session_cache    shared:SSL:1m;
    ssl_session_timeout  5m;

    ssl_ciphers  HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers  on;

    location / {
        proxy_pass http://127.0.0.1:5230;
    }
}
5. Style the tags with custom CSS

/* Memos tag styling */
span.inline-block.w-auto.text-blue-600.dark\:text-blue-400{
   color: #f3f3f3;
   background-color: #40b76b;
   box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
   border-radius: 2px;
   padding: 2px 6px;
   font-size: 15px;
   margin-bottom: 4px;
}

/* Give sibling tags different colors */
/* 2nd tag */
span.inline-block.w-auto.text-blue-600.dark\:text-blue-400:nth-child(n+2) {
   background-color: #157cf5;
}
/* 3rd tag */
span.inline-block.w-auto.text-blue-600.dark\:text-blue-400:nth-child(n+4) {
   background-color: #f298a6;
}

/* Memos highlighted-text styling */
mark {
   background-color: #F27579;
   color: #f3f3f3;
   padding-left:2px;
   border-radius: 2px;
}

n8n

1. Create the n8n database and user in the postgres container:

docker exec -it postgres psql -d postgres -U postgres

CREATE USER n8n WITH PASSWORD 'n8n@vps!';
CREATE DATABASE n8n owner=n8n;
ALTER DATABASE n8n OWNER TO n8n;
GRANT ALL privileges ON DATABASE n8n TO n8n;

2. Install with docker-compose; create n8n.yaml:

services:
  n8n:
    image: n8nio/n8n
    container_name: n8n
    restart: always
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=n8n@vps!
      - TZ="Asia/Shanghai"
      - GENERIC_TIMEZONE="Asia/Shanghai"
      - WEBHOOK_URL=https://n8n.chensoul.cc/
      - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true
    ports:
      - 5678:5678
    networks:
      - custom

networks:
  custom:
    external: true

3. Start

docker-compose -f n8n.yaml up -d

4. Create the nginx config /etc/nginx/conf.d/n8n.conf:

server {
    listen 80;
    listen [::]:80;
    server_name n8n.chensoul.cc;

    return 301 https://$host$request_uri;
}

server {
    listen          443 ssl;
    server_name     n8n.chensoul.cc;

    ssl_certificate      /etc/nginx/ssl/fullchain.cer;
    ssl_certificate_key  /etc/nginx/ssl/chensoul.cc.key;

    ssl_session_cache    shared:SSL:1m;
    ssl_session_timeout  5m;

    ssl_ciphers  HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers  on;

    location / {
      proxy_pass http://127.0.0.1:5678;
      chunked_transfer_encoding off;
      proxy_buffering off;
      proxy_cache off;
      access_log /var/log/nginx/n8n.log combined buffer=128k flush=5s;

      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "Upgrade";
    }
}

If the proxy settings here are wrong (in particular the Upgrade/Connection headers), accessing port 5678 directly works fine, but through nginx a workflow will appear to stay stuck in the executing state.

5. Add workflows

Following http://stiles.cc/archives/237/, I currently have workflows set up that sync GitHub, Douban, RSS, and memos to Telegram.

6. Upgrade

docker-compose -f n8n.yaml down
docker pull n8nio/n8n
docker-compose -f n8n.yaml up -d

7. Backup

Install jq:

yum install epel-release -y
yum install jq -y

/opt/scripts/backup_n8n.sh

#!/bin/bash

DATE=$(date +%Y%m%d)
BACKUP_DIR="/data/backup/n8n"
mkdir -p $BACKUP_DIR/$DATE

cd $BACKUP_DIR

docker exec -u node n8n n8n export:workflow --backup --output=./$DATE
docker cp n8n:/home/node/$DATE ${BACKUP_DIR}

cd ${BACKUP_DIR}/$DATE
for file in `ls`; do
    filename=$(cat "$file" | jq -r '.name')  
    mv "$file" "$filename".json
done

docker exec -u node n8n n8n export:credentials --all --output=./credentials.json
docker cp n8n:/home/node/credentials.json ${BACKUP_DIR}
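
To restore from such a backup, n8n's import commands can be used. A rough sketch (double-check the flags against your n8n version; the date directory is just an example):

# Copy a day's exported workflows and the credentials file back into the container
docker cp /data/backup/n8n/20241230 n8n:/home/node/restore
docker cp /data/backup/n8n/credentials.json n8n:/home/node/credentials.json

# Import them
docker exec -u node n8n n8n import:workflow --separate --input=/home/node/restore
docker exec -u node n8n n8n import:credentials --input=/home/node/credentials.json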

Bark

Bark is an iOS App which allows you to push custom notifications to your iPhone.

https://github.com/Finb/Bark

When needed, you can call the Bark API to push a message to your iPhone.

  1. Install Bark on iOS.

  2. Install the server with Docker:

docker run -dt --name bark -p 8090:8080 -v /data/bark:/data finab/bark-server
  3. Add the private server http://x.x.x.x:8090 in the Bark iOS app.

  4. Test:

curl -X "POST" "http://x.x.x.x:8090/erWZgGQoV9CqGGwHD8AkBT" \
     -H 'Content-Type: application/json; charset=utf-8' \
     -d $'{
  "body": "这是一个测试",
  "title": "Test Title",
  "badge": 1,
  "category": "myNotificationCategory",
  "icon": "https://www.usememos.com/logo-rounded.png",
  "group": "test"
}'
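
Bark also accepts a simple GET request with the title and body in the URL, which is handy in shell scripts (same device key as above; remember to URL-encode spaces):

curl "http://x.x.x.x:8090/erWZgGQoV9CqGGwHD8AkBT/Test%20Title/This%20is%20a%20test"
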
  5. Create the nginx config /etc/nginx/conf.d/bark.conf:

    server {
        listen 80;
        listen [::]:80;
        server_name bark.chensoul.cc;
    
        return 301 https://$host$request_uri;
    }
    
    server {
        listen          443 ssl;
        server_name     bark.chensoul.cc;
    
        ssl_certificate      /etc/nginx/ssl/fullchain.cer;
        ssl_certificate_key  /etc/nginx/ssl/chensoul.cc.key;
    
        ssl_session_cache    shared:SSL:1m;
        ssl_session_timeout  5m;
    
        ssl_ciphers  HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers  on;
    
        location / {
          proxy_pass http://127.0.0.1:8090;
        }
    }
    

Planka

The realtime kanban board for workgroups built with React and Redux.

https://github.com/plankanban/planka

A kanban tool to replace Trello.

For installation steps, see the official documentation.

  1. To use the postgres instance installed earlier, modify the docker-compose file (planka.yaml) as shown below:
    • Change the port to 1337
    • Change DATABASE_URL
    • Change volumes
    • Set the default admin user

services:
  planka:
    image: ghcr.io/plankanban/planka:latest
    container_name: planka
    restart: on-failure
    volumes:
      - /data/planka/user-avatars:/app/public/user-avatars
      - /data/planka/project-background-images:/app/public/project-background-images
      - /data/planka/attachments:/app/private/attachments
    ports:
      - 1337:1337
    environment:
      - BASE_URL=https://planka.chensoul.cc/
      - DATABASE_URL=postgresql://planka:planka%40vps!@postgres/planka
      - SECRET_KEY=notsecretkey

      # - TRUST_PROXY=0
      # - TOKEN_EXPIRES_IN=365 # In days

      # related: https://github.com/knex/knex/issues/2354
      # As knex does not pass query parameters from the connection string we
      # have to use environment variables in order to pass the desired values, e.g.
      # - PGSSLMODE=<value>

      # Configure knex to accept SSL certificates
      # - KNEX_REJECT_UNAUTHORIZED_SSL_CERTIFICATE=false

      # - [email protected] # Do not remove if you want to prevent this user from being edited/deleted
      - DEFAULT_ADMIN_PASSWORD=demo
      - DEFAULT_ADMIN_NAME=Demo Demo
      - DEFAULT_ADMIN_USERNAME=demo

      # - SHOW_DETAILED_AUTH_ERRORS=false # Set to true to show more detailed authentication error messages. It should not be enabled without a rate limiter for security reasons.
      # - ALLOW_ALL_TO_CREATE_PROJECTS=true

      # - S3_ENDPOINT=
      # - S3_REGION=
      # - S3_ACCESS_KEY_ID=
      # - S3_SECRET_ACCESS_KEY=
      # - S3_BUCKET=
      # - S3_FORCE_PATH_STYLE=true

      # - OIDC_ISSUER=
      # - OIDC_CLIENT_ID=
      # - OIDC_CLIENT_SECRET=
      # - OIDC_ID_TOKEN_SIGNED_RESPONSE_ALG=
      # - OIDC_USERINFO_SIGNED_RESPONSE_ALG=
      # - OIDC_SCOPES=openid email profile
      # - OIDC_RESPONSE_MODE=fragment
      # - OIDC_USE_DEFAULT_RESPONSE_MODE=true
      # - OIDC_ADMIN_ROLES=admin
      # - OIDC_CLAIMS_SOURCE=userinfo
      # - OIDC_EMAIL_ATTRIBUTE=email
      # - OIDC_NAME_ATTRIBUTE=name
      # - OIDC_USERNAME_ATTRIBUTE=preferred_username
      # - OIDC_ROLES_ATTRIBUTE=groups
      # - OIDC_IGNORE_USERNAME=true
      # - OIDC_IGNORE_ROLES=true
      # - OIDC_ENFORCED=true

      # Email Notifications (https://nodemailer.com/smtp/)
      # - SMTP_HOST=
      # - SMTP_PORT=587
      # - SMTP_NAME=
      # - SMTP_SECURE=true
      # - SMTP_USER=
      # - SMTP_PASSWORD=
      # - SMTP_FROM="Demo Demo" <[email protected]>
      # - SMTP_TLS_REJECT_UNAUTHORIZED=false

      # Optional fields: accessToken, events, excludedEvents
      # - |
      #   WEBHOOKS=[{
      #     "url": "http://localhost:3001",
      #     "accessToken": "notaccesstoken",
      #     "events": ["cardCreate", "cardUpdate", "cardDelete"],
      #     "excludedEvents": ["notificationCreate", "notificationUpdate"]
      #   }]

      # - SLACK_BOT_TOKEN=
      # - SLACK_CHANNEL_ID=

      # - GOOGLE_CHAT_WEBHOOK_URL=

      # - TELEGRAM_BOT_TOKEN=
      # - TELEGRAM_CHAT_ID=
      # - TELEGRAM_THREAD_ID=
    networks:
      - custom

networks:
  custom:
    external: true

  2. Create the planka database and user in the postgres container:

docker exec -it postgres psql -d postgres -U postgres

CREATE USER planka WITH PASSWORD 'planka@vps!';
CREATE DATABASE planka owner=planka;
ALTER DATABASE planka OWNER TO planka;
GRANT ALL privileges ON DATABASE planka TO planka;

3. Start

docker-compose -f planka.yaml up -d

4. Create the nginx config /etc/nginx/conf.d/planka.conf:

server {
    listen 80;
    listen [::]:80;
    server_name planka.chensoul.cc;

    return 301 https://$host$request_uri;
}

server {
    listen          443 ssl;
    server_name     planka.chensoul.cc;

    ssl_certificate      /etc/nginx/ssl/fullchain.cer;
    ssl_certificate_key  /etc/nginx/ssl/chensoul.cc.key;

    ssl_session_cache    shared:SSL:1m;
    ssl_session_timeout  5m;

    ssl_ciphers  HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers  on;

    location / {
      proxy_pass http://127.0.0.1:1337;
    }
}

Hoarder

A self-hostable bookmark-everything app with a touch of AI for the data hoarders out there.


https://github.com/hoarder-app/hoarder

  1. Install with docker-compose; create hoarder.yaml:

version: "3.8"
services:
  web:
    image: ghcr.io/hoarder-app/hoarder:${HOARDER_VERSION:-release}
    container_name: hoarder
    restart: unless-stopped
    volumes:
      - /data/hoarder:/data
    ports:
      - 3002:3000
    environment:
      NEXTAUTH_SECRET: super_random_string
      MEILI_MASTER_KEY: another_random_string
      NEXTAUTH_URL: https://hoarder.chensoul.cc
      MEILI_ADDR: http://meilisearch:7700
      BROWSER_WEB_URL: http://chrome:9222
      # OPENAI_API_KEY: ...
      DATA_DIR: /data
  chrome:
    image: gcr.io/zenika-hub/alpine-chrome:123
    restart: unless-stopped
    command:
      - --no-sandbox
      - --disable-gpu
      - --disable-dev-shm-usage
      - --remote-debugging-address=0.0.0.0
      - --remote-debugging-port=9222
      - --hide-scrollbars
  meilisearch:
    image: getmeili/meilisearch:v1.11.1
    restart: unless-stopped
    environment:
      MEILI_NO_ANALYTICS: "true"
    volumes:
      - meilisearch:/meili_data

volumes:
  meilisearch:

  2. Start

docker-compose -f hoarder.yaml up -d

  3. Create the nginx config /etc/nginx/conf.d/hoarder.conf:

server {
    listen 80;
    listen [::]:80;
    server_name hoarder.chensoul.cc;

    return 301 https://$host$request_uri;
}

server {
    listen          443 ssl;
    server_name     hoarder.chensoul.cc;

    ssl_certificate      /etc/nginx/ssl/fullchain.cer;
    ssl_certificate_key  /etc/nginx/ssl/chensoul.cc.key;

    ssl_session_cache    shared:SSL:1m;
    ssl_session_timeout  5m;

    ssl_ciphers  HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers  on;

    location / {
      proxy_pass http://127.0.0.1:3002;
    }
}

  4. Import URLs via the API

First create an API key.

Query the memos database (I use postgres) for records that contain URLs:

SELECT (regexp_matches(content, 'https?://\S+', 'g'))[1] AS url FROM memo where content like '%https:%'

Save the query results as all_links.txt.

Then run the import:

while IFS= read -r url; do
    docker run --rm ghcr.io/hoarder-app/hoarder-cli:release --api-key "ak1_066c79d8a9d634d4b826_d99f3a2732cab381f7c9" --server-addr "https://hoarder.chensoul.cc/" bookmarks add --link "$url"
done < all_links.txt

Service upgrades

/opt/scripts/upgrade.sh

#!/bin/bash

docker pull ghcr.io/umami-software/umami:postgresql-latest
docker pull neosmemo/memos:latest
docker pull postgres:17
docker pull n8nio/n8n


docker-compose -f memos.yaml down
docker-compose -f memos.yaml up -d

docker-compose -f umami.yaml down
docker-compose -f umami.yaml up -d

docker-compose -f n8n.yaml down
docker-compose -f n8n.yaml up -d
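
Old image layers pile up after repeated pulls; optionally clean them up afterwards:

# Remove dangling images left behind by the new pulls
docker image prune -f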

Database backup and restore

/opt/scripts/backup_db.sh

#!/bin/bash

CONTAINER_NAME="postgres"

BACKUP_DIR="/data/backup/database"

BACKUP_PREFIX=postgres

BACKUP_POSTFIX=$(date +%Y%m%d)

DATABASES=("memos" "n8n" "umami")

mkdir -p $BACKUP_DIR

for DB_NAME in "${DATABASES[@]}"
do
  BACKUP_FILE="${BACKUP_PREFIX}_${DB_NAME}_${BACKUP_POSTFIX}.sql"

  echo "docker exec $CONTAINER_NAME pg_dump -U $DB_NAME -W -F p $DB_NAME > $BACKUP_DIR/$BACKUP_FILE"
  docker exec $CONTAINER_NAME pg_dump -U $DB_NAME -W -F p $DB_NAME > $BACKUP_DIR/$BACKUP_FILE

  if [ $? -eq 0 ]; then
    echo "DB $DB_NAME backup success: $BACKUP_FILE"
  else
    echo "DB $DB_NAME backup fail"
  fi
done

# Delete backup files older than 7 days
find $BACKUP_DIR -name "${BACKUP_PREFIX}_*.sql" -type f -mtime +7 -exec rm -f {} \;

Restore:

docker cp /data/backup/database postgres:/
docker exec -it postgres bash

psql -d memos -U memos
\i /database/postgres_memos_20241230.sql

psql -d umami -U umami
\i /database/postgres_umami_20241230.sql

psql -d n8n -U n8n
\i /database/postgres_n8n_20241230.sql

Cron jobs

8 9 * * * "/root/.acme.sh"/acme.sh --cron --home "/root/.acme.sh" > /dev/null
*/20 * * * * /usr/sbin/ntpdate pool.ntp.org > /dev/null 2>&1
0 1 * * * sh /opt/scripts/backup_db.sh
0 1 * * * sh /opt/scripts/backup_n8n.sh
