Configuring the PVE Virtual Machines

Series - Building Your Own Home Server

This VM mainly runs the usual Docker services, such as a home media system, file sharing, and automation scripts, essentially playing the role of a NAS. It runs Ubuntu 20.04.

Harden SSH: move it to a high non-standard port, switch to key-based login, and disable both password login and root login.

# Create the ssh directory first if it does not exist yet
mkdir -p ~/.ssh && chmod 700 ~/.ssh && touch ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys

# Paste the public key into this file
vim ~/.ssh/authorized_keys

# Edit the sshd config to enable key-based login
vim /etc/ssh/sshd_config
# Set the following parameters
PermitRootLogin no
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
Port <custom-high-port>
# Restart the ssh service
systemctl restart sshd
# Once key-based login has been tested successfully, edit /etc/ssh/sshd_config again and disable password login
PasswordAuthentication no
ChallengeResponseAuthentication no
# Restart the ssh service again
systemctl restart sshd
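
The sshd_config edits above can also be scripted with sed instead of done by hand in vim. A minimal sketch, shown here against a local demo copy rather than the live /etc/ssh/sshd_config (22222 is a placeholder port):

```shell
# Demo file standing in for /etc/ssh/sshd_config
cat > sshd_config.demo <<'EOF'
#Port 22
#PermitRootLogin prohibit-password
PubkeyAuthentication yes
#PasswordAuthentication yes
EOF

# Apply the hardening settings in place (handles both commented and uncommented lines)
sed -i -E 's/^#?Port .*/Port 22222/' sshd_config.demo
sed -i -E 's/^#?PermitRootLogin .*/PermitRootLogin no/' sshd_config.demo
sed -i -E 's/^#?PasswordAuthentication .*/PasswordAuthentication no/' sshd_config.demo

cat sshd_config.demo
```

On the real file, it is worth running `sudo sshd -t` afterwards to syntax-check the config before restarting the service.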

Change the time zone

sudo timedatectl set-timezone Asia/Shanghai
# Verify
timedatectl

Avoid retyping the password on every sudo

sudo passwd root	# set a root password first
sudo visudo	# edits /etc/sudoers with a syntax check (safer than opening it in vim directly)
<username> ALL=(ALL) NOPASSWD:ALL	# append this line at the end so the given user can sudo without a password
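
A somewhat safer variant, for what it is worth: rather than appending to /etc/sudoers itself, put the rule in a drop-in file under /etc/sudoers.d/ and syntax-check it before installing it. Sketched below against a local demo file; "alice" is a placeholder username:

```shell
# Write the rule to a demo drop-in file (real target: /etc/sudoers.d/90-alice)
echo 'alice ALL=(ALL) NOPASSWD:ALL' > 90-alice.demo

# On the real system (commented out here, both need root / visudo installed):
# visudo -cf 90-alice.demo                                  # syntax check
# sudo install -m 0440 90-alice.demo /etc/sudoers.d/90-alice

cat 90-alice.demo
```

A broken /etc/sudoers can lock you out of sudo entirely, which is why the drop-in plus syntax check is the conventional approach.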
# Back up the current apt sources
echo "message->set aliyun mirrors"
sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak

# Replace them with the Aliyun mirror
echo "deb https://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse

deb https://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse

deb https://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse

# deb https://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse
# deb-src https://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse

deb https://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse" | sudo tee /etc/apt/sources.list

# Refresh the package index
sudo apt-get update

# Remove old Docker versions
echo "message->uninstall old docker"
sudo apt-get remove docker docker-engine docker.io containerd runc

# Refresh the package index and install the packages needed for HTTPS repositories
echo "message->start to install docker"
sudo apt-get update
sudo apt-get install -y \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

# Add the GPG key for Docker's official repository
sudo mkdir -m 0755 -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

# Set up the repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Update the apt package index
sudo apt-get update

# Install the latest version
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Configure registry mirrors
echo "message->set docker mirrors"
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": [
  "http://hub-mirror.c.163.com",
  "https://mirror.ccs.tencentyun.com"
  ]
}
EOF
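
A malformed daemon.json will stop dockerd from starting at all, so it can be worth validating the JSON before the restart below. One way, using Python's stdlib json.tool and shown against a demo copy of the file written above:

```shell
# Demo copy of the mirror config (real target: /etc/docker/daemon.json)
cat > daemon.json.demo <<'EOF'
{
  "registry-mirrors": [
  "http://hub-mirror.c.163.com",
  "https://mirror.ccs.tencentyun.com"
  ]
}
EOF

# json.tool exits non-zero on invalid JSON, so this doubles as a lint step
python3 -m json.tool daemon.json.demo > /dev/null && echo "daemon.json OK"
# prints: daemon.json OK
```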

# Restart docker
echo "message->restart docker"
sudo systemctl daemon-reload
sudo systemctl restart docker

# Make memory limits take effect (enable cgroup swap accounting)
echo "message->handle swap limit problem"
sudo sed -i 's/GRUB_CMDLINE_LINUX=""/GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"/g' /etc/default/grub
sudo update-grub
sudo reboot
# List local images
sudo docker images
# Show resource usage (memory, CPU, IO) of running containers
sudo docker stats
# List running containers and their IDs
sudo docker ps
# Open a shell inside a container
sudo docker exec -it <container-name-or-ID> bash
# Stop a container
sudo docker stop <container-name-or-ID>
# Remove a container
sudo docker rm <container-name-or-ID>
# Remove an image
sudo docker rmi <image-ID>
# Export all images into a single archive
sudo docker image save -o images.tar $(sudo docker images --format '{{.Repository}}:{{.Tag}}')
# Import images from the archive
sudo docker image load -i images.tar
# Remove all local images
sudo docker rmi $(sudo docker images -q)
# Prune unused images
sudo docker image prune -af
# Start the compose stack; --compatibility is there so that memory limit settings take effect
sudo docker compose --compatibility up -d
# Stop and remove the containers
sudo docker compose --compatibility down
# Follow the logs (new output shows up automatically; press Ctrl+C to exit)
sudo docker compose logs -f
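
One note on the --compatibility flag used above: in v3 compose files, per-service memory limits live under deploy.resources, a section originally honored only in swarm mode; --compatibility translates those limits into container-level settings. A hypothetical service entry (image name and limit value are placeholders):

```yaml
services:
  jellyfin:                # placeholder service name
    image: jellyfin/jellyfin
    deploy:
      resources:
        limits:
          memory: 512M     # applied to the container when started with --compatibility
    restart: unless-stopped
```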

To avoid leaking data, it is best to put each service port behind a layer of HTTPS. The following shows how to get HTTPS working for the services on the NAS VM in a LAN-only environment with no domain name.

The idea is to issue your own CA certificate and use it to sign the certificates generated for each service. There are two ways to do it.

The first is quick and simple: download the Windows build of mkcert and run the following.

# Install the local CA into the system trust store
mkcert-v1.4.4-windows-amd64.exe -install
# Generate the server private key and a signed certificate
mkcert-v1.4.4-windows-amd64.exe <server-address>
# Show where the CA certificate is kept
mkcert-v1.4.4-windows-amd64.exe -CAROOT

The other way is to generate the certificates with openssl, on Linux.

First openssl itself has to be dealt with. Ubuntu 20.04 ships with openssl, but the bundled version errors out when trying to CA-sign the server certificate, so I removed it and built a fresh one from source, version openssl-3.2.0.tar.gz.

# Remove the distro package
sudo apt-get remove openssl
# Install the build tools
sudo apt install build-essential checkinstall zlib1g-dev
# Build and install (run from inside the unpacked openssl-3.2.0 source directory)
./config --prefix=/usr/local/openssl --openssldir=/usr/local/openssl shared zlib
make
sudo make install

# Update PATH so the newly installed OpenSSL is picked up on the command line
echo 'export PATH=/usr/local/openssl/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
# Update the dynamic linker cache. Mind the lib64 here: if the install created lib instead, adjust the path accordingly
echo '/usr/local/openssl/lib64' | sudo tee -a /etc/ld.so.conf
sudo ldconfig

Then generate the certificates.

# Generate a 2048-bit RSA key for the CA
openssl genrsa -out myCA.key 2048

# Create a self-signed CA certificate valid for 100 years; you will be prompted for the usual organization details
openssl req -new -x509 -key myCA.key -out myCA.cer -days 36500

# Create a directory per service to keep its files together
mkdir <service-name> && cd <service-name>

# Create openssl.cnf and fill it in; the content is long, so it is listed further below
vim openssl.cnf

# Generate a 2048-bit RSA key for the server
openssl genrsa -out server.key 2048

# Similar prompts to the CA certificate; for Common Name (CN), it is best to enter the server's IP address or domain name directly
openssl req -config openssl.cnf -new -sha256 -out server.req -key server.key

# Sign the request to obtain the server certificate, valid for 1000 days
openssl x509 -req -extfile openssl.cnf -extensions v3_req -in server.req -out server.cer -CAkey ../myCA.key -CA ../myCA.cer -days 1000 -CAcreateserial -CAserial serial -sha256

# Finally, the command that produces the PKCS#12 bundle emby needs
openssl pkcs12 -export -out server.p12 -inkey server.key -in server.cer -certfile ../myCA.cer
# Or with the password baked in; leave it empty to be prompted later instead
openssl pkcs12 -export -out server.p12 -inkey server.key -in server.cer -certfile ../myCA.cer -password pass:<password>
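
To sanity-check the whole flow end to end, here is a self-contained dry run of the commands above in a throwaway directory. It swaps the interactive prompts and the openssl.cnf for -subj and a one-line -extfile, which is equivalent for this purpose; the IP 192.168.0.2 is a placeholder:

```shell
tmpdir=$(mktemp -d) && cd "$tmpdir"

# CA key and self-signed CA certificate
openssl genrsa -out myCA.key 2048
openssl req -new -x509 -key myCA.key -out myCA.cer -days 36500 -subj "/CN=Home Lab CA"

# Server key and signing request, with the server IP as the CN
openssl genrsa -out server.key 2048
openssl req -new -sha256 -key server.key -out server.req -subj "/CN=192.168.0.2"

# SAN extension, the equivalent of the [ alt_names ] entry in openssl.cnf
printf 'subjectAltName=IP:192.168.0.2\n' > san.ext

# Sign the request with the CA
openssl x509 -req -in server.req -CA myCA.cer -CAkey myCA.key \
    -CAcreateserial -days 1000 -sha256 -extfile san.ext -out server.cer

# The server certificate should verify cleanly against the CA
openssl verify -CAfile myCA.cer server.cer
# prints: server.cer: OK
```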

The content of openssl.cnf can be taken from https://github.com/openssl/openssl/blob/master/apps/openssl.cnf, the sample provided by the OpenSSL project; you only need to add the VM's IP under the [ alt_names ] section.

A backup copy of openssl.cnf is kept below.

#
# OpenSSL example configuration file.
# See doc/man5/config.pod for more info.
#
# This is mostly being used for generation of certificate requests,
# but may be used for auto loading of providers

# Note that you can include other files from the main configuration
# file using the .include directive.
#.include filename

# This definition stops the following lines choking if HOME isn't
# defined.
HOME			= .

# Use this in order to automatically load providers.
openssl_conf = openssl_init

# Comment out the next line to ignore configuration errors
config_diagnostics = 1

# Extra OBJECT IDENTIFIER info:
# oid_file       = $ENV::HOME/.oid
oid_section = new_oids

# To use this configuration file with the "-extfile" option of the
# "openssl x509" utility, name here the section containing the
# X.509v3 extensions to use:
# extensions		=
# (Alternatively, use a configuration file that has only
# X.509v3 extensions in its main [= default] section.)

[ new_oids ]
# We can add new OIDs in here for use by 'ca', 'req' and 'ts'.
# Add a simple OID like this:
# testoid1=1.2.3.4
# Or use config file substitution like this:
# testoid2=${testoid1}.5.6

# Policies used by the TSA examples.
tsa_policy1 = 1.2.3.4.1
tsa_policy2 = 1.2.3.4.5.6
tsa_policy3 = 1.2.3.4.5.7

# For FIPS
# Optionally include a file that is generated by the OpenSSL fipsinstall
# application. This file contains configuration data required by the OpenSSL
# fips provider. It contains a named section e.g. [fips_sect] which is
# referenced from the [provider_sect] below.
# Refer to the OpenSSL security policy for more information.
# .include fipsmodule.cnf

[openssl_init]
providers = provider_sect

# List of providers to load
[provider_sect]
default = default_sect
# The fips section name should match the section name inside the
# included fipsmodule.cnf.
# fips = fips_sect

# If no providers are activated explicitly, the default one is activated implicitly.
# See man 7 OSSL_PROVIDER-default for more details.
#
# If you add a section explicitly activating any other provider(s), you most
# probably need to explicitly activate the default provider, otherwise it
# becomes unavailable in openssl.  As a consequence applications depending on
# OpenSSL may not work correctly which could lead to significant system
# problems including inability to remotely access the system.
[default_sect]
# activate = 1


####################################################################
[ ca ]
default_ca	= CA_default		# The default ca section

####################################################################
[ CA_default ]

dir		= ./demoCA		# Where everything is kept
certs		= $dir/certs		# Where the issued certs are kept
crl_dir		= $dir/crl		# Where the issued crl are kept
database	= $dir/index.txt	# database index file.
#unique_subject	= no			# Set to 'no' to allow creation of
					# several certs with same subject.
new_certs_dir	= $dir/newcerts		# default place for new certs.

certificate	= $dir/cacert.pem 	# The CA certificate
serial		= $dir/serial 		# The current serial number
crlnumber	= $dir/crlnumber	# the current crl number
					# must be commented out to leave a V1 CRL
crl		= $dir/crl.pem 		# The current CRL
private_key	= $dir/private/cakey.pem # The private key

x509_extensions	= usr_cert		# The extensions to add to the cert

# Comment out the following two lines for the "traditional"
# (and highly broken) format.
name_opt 	= ca_default		# Subject Name options
cert_opt 	= ca_default		# Certificate field options

# Extension copying option: use with caution.
# copy_extensions = copy

# Extensions to add to a CRL. Note: Netscape communicator chokes on V2 CRLs
# so this is commented out by default to leave a V1 CRL.
# crlnumber must also be commented out to leave a V1 CRL.
# crl_extensions	= crl_ext

default_days	= 365			# how long to certify for
default_crl_days= 30			# how long before next CRL
default_md	= default		# use public key default MD
preserve	= no			# keep passed DN ordering

# A few difference way of specifying how similar the request should look
# For type CA, the listed attributes must be the same, and the optional
# and supplied fields are just that :-)
policy		= policy_match

# For the CA policy
[ policy_match ]
countryName		= match
stateOrProvinceName	= match
organizationName	= match
organizationalUnitName	= optional
commonName		= supplied
emailAddress		= optional

# For the 'anything' policy
# At this point in time, you must list all acceptable 'object'
# types.
[ policy_anything ]
countryName		= optional
stateOrProvinceName	= optional
localityName		= optional
organizationName	= optional
organizationalUnitName	= optional
commonName		= supplied
emailAddress		= optional

####################################################################
[ req ]
default_bits		= 2048
default_keyfile 	= privkey.pem
distinguished_name	= req_distinguished_name
attributes		= req_attributes
x509_extensions	= v3_ca	# The extensions to add to the self signed cert

# Passwords for private keys if not present they will be prompted for
# input_password = secret
# output_password = secret

# This sets a mask for permitted string types. There are several options.
# default: PrintableString, T61String, BMPString.
# pkix	 : PrintableString, BMPString (PKIX recommendation before 2004)
# utf8only: only UTF8Strings (PKIX recommendation after 2004).
# nombstr : PrintableString, T61String (no BMPStrings or UTF8Strings).
# MASK:XXXX a literal mask value.
# WARNING: ancient versions of Netscape crash on BMPStrings or UTF8Strings.
string_mask = utf8only

# req_extensions = v3_req # The extensions to add to a certificate request

[ req_distinguished_name ]
countryName			= Country Name (2 letter code)
countryName_default		= AU
countryName_min			= 2
countryName_max			= 2

stateOrProvinceName		= State or Province Name (full name)
stateOrProvinceName_default	= Some-State

localityName			= Locality Name (eg, city)

0.organizationName		= Organization Name (eg, company)
0.organizationName_default	= Internet Widgits Pty Ltd

# we can do this but it is not needed normally :-)
#1.organizationName		= Second Organization Name (eg, company)
#1.organizationName_default	= World Wide Web Pty Ltd

organizationalUnitName		= Organizational Unit Name (eg, section)
#organizationalUnitName_default	=

commonName			= Common Name (e.g. server FQDN or YOUR name)
commonName_max			= 64

emailAddress			= Email Address
emailAddress_max		= 64

# SET-ex3			= SET extension number 3

[ req_attributes ]
challengePassword		= A challenge password
challengePassword_min		= 4
challengePassword_max		= 20

unstructuredName		= An optional company name

[ usr_cert ]

# These extensions are added when 'ca' signs a request.

# This goes against PKIX guidelines but some CAs do it and some software
# requires this to avoid interpreting an end user certificate as a CA.

basicConstraints=CA:FALSE

# This is typical in keyUsage for a client certificate.
# keyUsage = nonRepudiation, digitalSignature, keyEncipherment

# PKIX recommendations harmless if included in all certificates.
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer

# This stuff is for subjectAltName and issuerAltname.
# Import the email address.
# subjectAltName=email:copy
# An alternative to produce certificates that aren't
# deprecated according to PKIX.
# subjectAltName=email:move

# Copy subject details
# issuerAltName=issuer:copy

# This is required for TSA certificates.
# extendedKeyUsage = critical,timeStamping

[ v3_req ]

# Extensions to add to a certificate request

basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment

subjectAltName = @alt_names

# This is the key part: list every domain or IP the service will ultimately be reached at; multiple entries are allowed, e.g. DNS.X = XXXXXX, IP.X = XXXXXX
[ alt_names ]
# DNS.1 = xunshi.com
# DNS.2 = *.xunshi.com
# IP.1 = 192.168.0.2
# IP.2 = 192.168.0.3

[ v3_ca ]


# Extensions for a typical CA


# PKIX recommendation.

subjectKeyIdentifier=hash

authorityKeyIdentifier=keyid:always,issuer

basicConstraints = critical,CA:true

# Key usage: this is typical for a CA certificate. However since it will
# prevent it being used as an test self-signed certificate it is best
# left out by default.
# keyUsage = cRLSign, keyCertSign

# Include email address in subject alt name: another PKIX recommendation
# subjectAltName=email:copy
# Copy issuer details
# issuerAltName=issuer:copy

# DER hex encoding of an extension: beware experts only!
# obj=DER:02:03
# Where 'obj' is a standard or added object
# You can even override a supported extension:
# basicConstraints= critical, DER:30:03:01:01:FF

[ crl_ext ]

# CRL extensions.
# Only issuerAltName and authorityKeyIdentifier make any sense in a CRL.

# issuerAltName=issuer:copy
authorityKeyIdentifier=keyid:always

[ proxy_cert_ext ]
# These extensions should be added when creating a proxy certificate

# This goes against PKIX guidelines but some CAs do it and some software
# requires this to avoid interpreting an end user certificate as a CA.

basicConstraints=CA:FALSE

# This is typical in keyUsage for a client certificate.
# keyUsage = nonRepudiation, digitalSignature, keyEncipherment

# PKIX recommendations harmless if included in all certificates.
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer

# This stuff is for subjectAltName and issuerAltname.
# Import the email address.
# subjectAltName=email:copy
# An alternative to produce certificates that aren't
# deprecated according to PKIX.
# subjectAltName=email:move

# Copy subject details
# issuerAltName=issuer:copy

# This really needs to be in place for it to be a proxy certificate.
proxyCertInfo=critical,language:id-ppl-anyLanguage,pathlen:3,policy:foo

####################################################################
[ tsa ]

default_tsa = tsa_config1	# the default TSA section

[ tsa_config1 ]

# These are used by the TSA reply generation only.
dir		= ./demoCA		# TSA root directory
serial		= $dir/tsaserial	# The current serial number (mandatory)
crypto_device	= builtin		# OpenSSL engine to use for signing
signer_cert	= $dir/tsacert.pem 	# The TSA signing certificate
					# (optional)
certs		= $dir/cacert.pem	# Certificate chain to include in reply
					# (optional)
signer_key	= $dir/private/tsakey.pem # The TSA private key (optional)
signer_digest  = sha256			# Signing digest to use. (Optional)
default_policy	= tsa_policy1		# Policy if request did not specify it
					# (optional)
other_policies	= tsa_policy2, tsa_policy3	# acceptable policies (optional)
digests     = sha1, sha256, sha384, sha512  # Acceptable message digests (mandatory)
accuracy	= secs:1, millisecs:500, microsecs:100	# (optional)
clock_precision_digits  = 0	# number of digits after dot. (optional)
ordering		= yes	# Is ordering defined for timestamps?
				# (optional, default: no)
tsa_name		= yes	# Must the TSA name be included in the reply?
				# (optional, default: no)
ess_cert_id_chain	= no	# Must the ESS cert id chain be included?
				# (optional, default: no)
ess_cert_id_alg		= sha256	# algorithm to compute certificate
				# identifier (optional, default: sha256)

[insta] # CMP using Insta Demo CA
# Message transfer
server = pki.certificate.fi:8700
# proxy = # set this as far as needed, e.g., http://192.168.1.1:8080
# tls_use = 0
path = pkix/

# Server authentication
recipient = "/C=FI/O=Insta Demo/CN=Insta Demo CA" # or set srvcert or issuer
ignore_keyusage = 1 # potentially needed quirk
unprotected_errors = 1 # potentially needed quirk
extracertsout = insta.extracerts.pem

# Client authentication
ref = 3078 # user identification
secret = pass:insta # can be used for both client and server side

# Generic message options
cmd = ir # default operation, can be overridden on cmd line with, e.g., kur

# Certificate enrollment
subject = "/CN=openssl-cmp-test"
newkey = insta.priv.pem
out_trusted = apps/insta.ca.crt # does not include keyUsage digitalSignature
certout = insta.cert.pem

[pbm] # Password-based protection for Insta CA
# Server and client authentication
ref = $insta::ref # 3078
secret = $insta::secret # pass:insta

[signature] # Signature-based protection for Insta CA
# Server authentication
trusted = $insta::out_trusted # apps/insta.ca.crt

# Client authentication
secret = # disable PBM
key = $insta::newkey # insta.priv.pem
cert = $insta::certout # insta.cert.pem

[ir]
cmd = ir

[cr]
cmd = cr

[kur]
# Certificate update
cmd = kur
oldcert = $insta::certout # insta.cert.pem

[rr]
# Certificate revocation
cmd = rr
oldcert = $insta::certout # insta.cert.pem

This part relies mainly on nginx.

  1. Install nginx

    sudo apt-get install nginx
    
  2. Start nginx

    sudo service nginx restart
    
  3. Make sure the directory /etc/nginx/ssl exists, and put the previously generated certificate and private key in it

  4. Make sure the file /etc/nginx/nginx.conf contains the following line

    include /etc/nginx/conf.d/*.conf;
    
  5. Create a <service-name>.conf file under /etc/nginx/conf.d with the following content

    server {
        listen <external-https-port> ssl;
    
        ssl_certificate ssl/server.cer;
        ssl_certificate_key ssl/server.key;
    
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4;
    
        location / {
            proxy_pass http://127.0.0.1:<service-port>;
            proxy_set_header HOST $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    
  6. Reload the nginx configuration; this also has to be run again every time a new <service-name>.conf file is added

    sudo nginx -s reload
    
  7. Other useful commands

    # Check the configuration files for syntax errors
    sudo nginx -t
    # Inspect the error log
    sudo view /var/log/nginx/error.log
    

Signing the certificates with your own CA is only half the job: the self-generated CA certificate itself must also be trusted before HTTPS will work properly. The following shows how to import the generated CA certificate on Linux.

  1. Install the CA import tooling

    sudo apt install ca-certificates
    
  2. Create the target directory (skip if it already exists)

    sudo mkdir /usr/local/share/ca-certificates
    
  3. Copy the self-signed CA certificate over; note that the extension has to be changed to .crt, otherwise it will not be imported

    sudo cp /<path-to-CA>/myCA.cer /usr/local/share/ca-certificates/myCA.crt
    
  4. Import the certificate

    sudo update-ca-certificates
    

Again, this runs in a VM of its own, with Ubuntu 20.04 as the example.

  1. Decide for yourself whether to change the port numbers below. The config files that follow assume: DNS on port 12321, tproxy-port on 17893, mixed-port on 7890, and the proxy web panel on 9090. Search and replace to suit your own setup.

  2. Find the DNS port in the config file your provider gives you; 12321 is assumed here. If you change it to another port, search and replace the corresponding port in the config files that follow.

  3. Prepare the program files. Create the directory /etc/transparentproxy/ and put config.yaml, Country.mmdb and the proxy binary in it. In addition, put the web panel folder in the root directory and rename it to ui.

  4. Prepare the transparent proxy rules. Create two scripts, both executable, under the same /etc/transparentproxy/ path. The first adjusts the iptables rules and is named aujust-iptables.sh, with the following content

    #!/bin/bash
    
    # Packets marked 555 must consult routing table 100
    ip rule add fwmark 555 lookup 100
    # All traffic in table 100 is delivered to the local machine
    ip route add local default dev lo table 100
    
    # New chain for external traffic that passes through the transparent proxy
    iptables -t mangle -N transparentproxy
    # Skip loopback and private LAN ranges
    iptables -t mangle -A transparentproxy -d 127.0.0.0/8 -j RETURN
    iptables -t mangle -A transparentproxy -d 192.168.0.0/16 -j RETURN
    iptables -t mangle -A transparentproxy -d 172.16.0.0/12 -j RETURN
    iptables -t mangle -A transparentproxy -d 10.0.0.0/8 -j RETURN
    # Skip special-purpose ranges
    iptables -t mangle -A transparentproxy -d 0.0.0.0/8 -j RETURN
    iptables -t mangle -A transparentproxy -d 169.254.0.0/16 -j RETURN
    iptables -t mangle -A transparentproxy -d 224.0.0.0/4 -j RETURN
    iptables -t mangle -A transparentproxy -d 240.0.0.0/4 -j RETURN
    # Anything that gets this far: TPROXY it to port 17893 and mark it 555
    iptables -t mangle -A transparentproxy -p tcp -j TPROXY --on-port 17893 --tproxy-mark 555
    iptables -t mangle -A transparentproxy -p udp -j TPROXY --on-port 17893 --tproxy-mark 555
    # Forward all DNS queries to port 12321
    iptables -t nat -I PREROUTING -p udp --dport 53 -j REDIRECT --to 12321
    # All external traffic passing through the transparent proxy goes to the new transparentproxy chain
    iptables -t mangle -A PREROUTING -j transparentproxy
    
    # New chain for outbound traffic, so the host's own requests also go through the proxy
    iptables -t mangle -N transparentproxy_local
    # Skip loopback and private LAN ranges
    iptables -t mangle -A transparentproxy_local -d 127.0.0.0/8 -j RETURN
    iptables -t mangle -A transparentproxy_local -d 192.168.0.0/16 -j RETURN
    iptables -t mangle -A transparentproxy_local -d 172.16.0.0/12 -j RETURN
    iptables -t mangle -A transparentproxy_local -d 10.0.0.0/8 -j RETURN
    # Skip special-purpose ranges
    iptables -t mangle -A transparentproxy_local -d 0.0.0.0/8 -j RETURN
    iptables -t mangle -A transparentproxy_local -d 169.254.0.0/16 -j RETURN
    iptables -t mangle -A transparentproxy_local -d 224.0.0.0/4 -j RETURN
    iptables -t mangle -A transparentproxy_local -d 240.0.0.0/4 -j RETURN
    # Skip packets emitted by the proxy service itself, to break the routing loop
    iptables -t mangle -A transparentproxy_local -m cgroup --path system.slice/transparentproxy.service -j RETURN
    # Marked packets pass through PREROUTING again
    iptables -t mangle -A transparentproxy_local -p tcp -j MARK --set-mark 555
    iptables -t mangle -A transparentproxy_local -p udp -j MARK --set-mark 555
    # All of the host's outbound requests go to the new transparentproxy_local chain
    iptables -t mangle -A OUTPUT -j transparentproxy_local
    
    # Masquerade the source address as the transparent proxy's; traffic for LAN devices is left alone
    iptables -t nat -I POSTROUTING -o eth0 ! -d 192.168.0.0/16 -j MASQUERADE
    
    exec "$@"
    

    The other is a cleanup script for the rules, named clean.sh, with the following content

    #!/usr/bin/env bash
    
    set -ex
    
    ip rule del fwmark 555 lookup 100 || true
    ip route del local default dev lo table 100 || true
    
    iptables -t nat -F
    iptables -t nat -X
    iptables -t mangle -F
    iptables -t mangle -X transparentproxy || true
    iptables -t mangle -X transparentproxy_local || true
    
  5. Create a systemd service for the proxy so that it starts at boot.

    Create the unit file with sudo vim /etc/systemd/system/transparentproxy.service; be sure to replace the proxy binary name with the real one

    [Unit]
    Description=transparentproxy
    After=network.target
    
    [Service]
    Type=simple
    CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_RAW
    AmbientCapabilities=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_RAW
    Restart=always
    
    ExecStartPre=+/usr/bin/bash /etc/transparentproxy/clean.sh
    ExecStart=/etc/transparentproxy/<proxy-binary> -d /etc/transparentproxy
    ExecStartPost=+/usr/bin/bash /etc/transparentproxy/aujust-iptables.sh
    ExecStopPost=+/usr/bin/bash /etc/transparentproxy/clean.sh
    
    [Install]
    WantedBy=multi-user.target
    

    Managing the systemd service

    # Enable start at boot
    sudo systemctl enable transparentproxy
    # Start
    sudo systemctl start transparentproxy
    # Check status and logs
    sudo systemctl status transparentproxy
    sudo journalctl -xe
    # Restart
    sudo systemctl restart transparentproxy
    
  6. Enable IP forwarding

    1. Edit the config file /etc/sysctl.conf, add the following line, then save and run sudo sysctl -p

      net.ipv4.ip_forward=1
      
    2. Check the content of /proc/sys/net/ipv4/ip_forward; if it is 1, the setting has taken effect

  7. Refresh the proxy's config file periodically

    The config file obtained from the provider cannot be used for transparent proxying as-is; it has to be adjusted first. Roughly, the following settings are required

    mixed-port: 7890
    tproxy-port: 17893
    external-controller: 0.0.0.0:9090
    external-ui: /ui
    allow-lan: true
    
    dns:
      enable: true
      listen: 0.0.0.0:12321
    

    To cut down on the manual work, write a script that pulls the config file and applies the changes, then schedule it with crontab. An example script follows

    #!/bin/sh
    
    temp_file="<temp-config-path>"
    LOG_FILE="<log-file-path>"
    config_file="/etc/transparentproxy/config.yaml"
    
    download_url="<subscription-URL>"
    
    # Download the subscription config file
    curl -sL "$download_url" > $temp_file
    
    # Check that the temp file actually has content
    if [ -s "$temp_file" ]; then
        echo "transparentproxy config download success"
    else
        echo "$(date +%Y/%m/%d\ %H:%M:%S) ERROR: transparentproxy config download failed, will retry on the next run" >> $LOG_FILE
        exit 1  # non-zero exit code to signal failure
    fi
    
    # Comment out the settings that are going to be replaced
    sed -i 's/^port:/#port:/' $temp_file
    sed -i 's/^socks-port:/#socks-port:/' $temp_file
    sed -i 's/^redir-port:/#redir-port:/' $temp_file
    sed -i 's/^mixed-port:/#mixed-port:/' $temp_file
    sed -i 's/^external-controller:/#external-controller:/' $temp_file
    
    # Insert mixed-port: 7890 as line 1
    sed -i '1i mixed-port: 7890' $temp_file
    # Insert tproxy-port: 17893 as line 2
    sed -i '2i tproxy-port: 17893' $temp_file
    # Insert the web panel settings
    sed -i '3i external-controller: 0.0.0.0:9090' $temp_file
    sed -i '4i external-ui: /ui' $temp_file
    
    # Switch allow-lan to true
    sed -i 's/allow-lan: false/allow-lan: true/' $temp_file
    
    # Change the dns listen address from 127.0.0.1:<any port> to 0.0.0.0:12321
    sed -i 's/listen: 127.0.0.1:[0-9]\+/listen: 0.0.0.0:12321/' $temp_file
    
    # Alternative: insert external-ui: /ui right after the external-controller line
    #sed -i '/external-controller: 0.0.0.0:9090/a external-ui: \/ui' $temp_file
    
    echo "$(date +%Y/%m/%d\ %H:%M:%S) INFO: transparentproxy config updated, restarting the service" >> $LOG_FILE
    
    # Install the new config file and restart the proxy service
    sudo cp $temp_file $config_file && sudo systemctl restart transparentproxy
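
Before pointing the script at a real subscription, the sed transforms can be dry-run against a small sample of a provider config:

```shell
# Sample provider config containing the fields the script rewrites
cat > config.demo <<'EOF'
port: 7891
socks-port: 7892
allow-lan: false
external-controller: 127.0.0.1:9090
dns:
  enable: true
  listen: 127.0.0.1:1053
EOF

sed -i 's/^port:/#port:/' config.demo
sed -i 's/^socks-port:/#socks-port:/' config.demo
sed -i 's/^external-controller:/#external-controller:/' config.demo
sed -i '1i mixed-port: 7890' config.demo
sed -i '2i tproxy-port: 17893' config.demo
sed -i '3i external-controller: 0.0.0.0:9090' config.demo
sed -i '4i external-ui: /ui' config.demo
sed -i 's/allow-lan: false/allow-lan: true/' config.demo
sed -i 's/listen: 127.0.0.1:[0-9]\+/listen: 0.0.0.0:12321/' config.demo

head -n 4 config.demo
```

The first four lines should now be the inserted mixed-port, tproxy-port, external-controller and external-ui settings, with the provider's own versions commented out further down.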
    

If your home broadband has no public IP but you still want to use the home server's services while away, you need NAT traversal. This too runs in a VM of its own, with Ubuntu 20.04 as the example.

With this approach, a cloud server is required to relay the traffic.

  1. Install wireguard on the VM

    sudo apt install wireguard
    sudo apt install resolvconf
    
  2. Enable IP forwarding on the VM; without it, only packets whose destination IP is the machine itself are handled, and everything else is dropped

    sudo vim /etc/sysctl.conf
    # Append the following line at the end
    net.ipv4.ip_forward = 1
    # Then apply it
    sudo sysctl -p /etc/sysctl.conf
    # An output of 1 means forwarding is enabled
    sudo cat /proc/sys/net/ipv4/ip_forward
    
  3. Configure wireguard on the cloud server. The open-source wg-easy project is recommended for the install: it generates the config files and key pairs in one step, so nothing needs to be written or tweaked by hand. Setting up a cloudflare tunnel is also recommended, so that logins to the admin page go over HTTPS.

    The docker-compose.yml for wg-easy on the cloud server follows. Two LAN subnets are involved: one for the wireguard overlay network, the other your own home LAN. The file assumes 192.168.100.0 as the wireguard overlay subnet and 192.168.3.0 as the home LAN subnet; search and replace the first three octets (e.g. 192.168.100) to quickly swap in your own.

    services:
      wireguard-easy:
        image: weejewel/wg-easy:7-nightly
        container_name: wireguard-easy
        environment:
          - WG_HOST=<cloud-server-IP>
          - WG_PORT=<wireguard-port> # a high port is recommended
          - PASSWORD=<password>
          - PORT=<web-UI-port>
          - WG_DEFAULT_ADDRESS=192.168.100.x # wireguard overlay subnet; note it ends in .x, not .0
          - WG_DEFAULT_DNS=114.114.114.114
          - WG_ALLOWED_IPS=192.168.100.0/24
          - WG_PERSISTENT_KEEPALIVE=25
          - LANG=chs
          - WG_POST_UP=iptables -I FORWARD -s 192.168.100.0/24 -i wg0 -d 192.168.100.0/24 -j ACCEPT;iptables -I FORWARD -s 192.168.100.0/24 -i wg0 -d 192.168.3.0/24 -j ACCEPT;iptables -I FORWARD -s 192.168.3.0/24 -i wg0 -d 192.168.100.0/24 -j ACCEPT;
          - WG_POST_DOWN=iptables -D FORWARD -s 192.168.100.0/24 -i wg0 -d 192.168.100.0/24 -j ACCEPT;iptables -D FORWARD -s 192.168.100.0/24 -i wg0 -d 192.168.3.0/24 -j ACCEPT;iptables -D FORWARD -s 192.168.3.0/24 -i wg0 -d 192.168.100.0/24 -j ACCEPT;
        volumes:
          - ./wireguard:/etc/wireguard
        ports:
          - <wireguard-port>:51820/udp
          - 127.0.0.1:<web-UI-port>:<web-UI-port>/tcp
        cap_add:
          - NET_ADMIN
          - SYS_MODULE
        sysctls:
          - net.ipv4.conf.all.src_valid_mark=1
          - net.ipv4.ip_forward=1
        restart: unless-stopped
    
  4. Configure the VM as a relay, so that other peers on the wireguard network can use it to reach the whole home LAN and therefore the home server. First log in to the wg-easy admin page on the cloud server via the cloudflare tunnel and add a configuration for this VM.

    After adding it, also edit AllowedIPs in the matching [Peer] entry of wireguard/wg0.conf (match it by the comment above the entry, which must agree with the configuration you just created): for example, change AllowedIPs = 192.168.100.2/32 into AllowedIPs = 192.168.3.0/24,192.168.100.2/32. This simply adds the home LAN subnet, so data bound for the home LAN can be forwarded on to its destination. Restart docker afterwards. Owing to a limitation in wg-easy, this setup does give working access to the home LAN, but the IPs assigned to configurations added later may collide; remember to correct them manually in the admin page.
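
    For illustration, the relevant [Peer] entry in the server-side wireguard/wg0.conf changes roughly as follows (keys elided, values are placeholders; the client-name comment wg-easy writes above each entry is how you find the right one):

```ini
# Client: relay-vm
[Peer]
PublicKey = <relay-vm-public-key>
PresharedKey = <preshared-key>
# before the edit:
#   AllowedIPs = 192.168.100.2/32
# after the edit (home LAN subnet added):
AllowedIPs = 192.168.3.0/24,192.168.100.2/32
```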

    Then download that configuration's file. Remember to add the following two lines before [Peer], adjusting the wireguard overlay subnet and the current VM's IP

    PostUp = iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -j SNAT --to-source <vm-IP>
    PostDown = iptables -t nat -D POSTROUTING -s 192.168.100.0/24 -j SNAT --to-source <vm-IP>
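
    Putting the pieces together, the downloaded config for the relay VM ends up shaped roughly like this (all keys and addresses below are placeholders):

```ini
[Interface]
PrivateKey = <vm-private-key>
Address = 192.168.100.2/24
DNS = 114.114.114.114
PostUp = iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -j SNAT --to-source <vm-IP>
PostDown = iptables -t nat -D POSTROUTING -s 192.168.100.0/24 -j SNAT --to-source <vm-IP>

[Peer]
PublicKey = <server-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.100.0/24
Endpoint = <cloud-server-IP>:<wireguard-port>
PersistentKeepalive = 25
```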
    

    Put the config file at the intended path for a test; once it passes, set it up as a service that starts at boot

    # Bring the tunnel up
    wg-quick up /home/<login-user>/software/wireguard/<config-name>.conf
    # ping 192.168.100.1 to see whether it is reachable; adjust the address to your own wireguard overlay subnet
    ping 192.168.100.1
    
    # Once the test passes, enable start at boot
    # Bring the tunnel down
    wg-quick down /home/<login-user>/software/wireguard/<config-name>.conf
    # Move the file into place
    sudo mv /home/<login-user>/software/wireguard/<config-name>.conf /etc/wireguard/wg0.conf
    sudo chmod 600 /etc/wireguard/wg0.conf
    # Enable start at boot
    sudo systemctl enable wg-quick@wg0
    sudo systemctl start wg-quick@wg0
    # Check the status
    sudo systemctl status wg-quick@wg0
    
  5. Connect other devices to the wireguard network. As before, log in to the wg-easy admin page on the cloud server via the cloudflare tunnel, add a configuration and download its file. This time the two lines before [Peer] are not needed, but AllowedIPs in the [Peer] section must be extended with the home LAN subnet on top of what is already there, e.g. 192.168.100.0/24 becomes 192.168.100.0/24, 192.168.3.0/24

If the mini PC has enough horsepower, you can also set up a Windows environment for light office work and entertainment.

Just follow the steps at https://pve.proxmox.com/wiki/Windows_10_guest_best_practices; note that they only apply to 64-bit Windows.
