A project of mine needs object storage, and I happen to have a few servers on hand, so I took the opportunity to set up an object storage service myself.
In the end it didn't hold up under load testing; the Oracle free-tier machines are just too weak.
Network topology
The planned network topology is shown below:
Setup steps
Mounting a virtual disk
All four of my servers have only a single disk. I could store MinIO's data directly in a folder on the root filesystem, but to keep the stored files from filling the whole disk and causing errors, or even leaving the server inaccessible, I first create a virtual disk image and mount it at /data.
1. Create a 40 GB virtual disk image that initially takes up no space (a sparse file):
dd if=/dev/zero of=/home/minio.img bs=1M seek=40960 count=0
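A quick way to confirm the image really is sparse is to compare its apparent size with the space it actually occupies (a sketch, assuming the image sits at /home/minio.img as above):
ls -lh /home/minio.img   # apparent size: 40G
du -h /home/minio.img    # blocks actually used: roughly 0 for a freshly created sparse file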
2. Format the image file and mount it:
mkdir /data
mkfs.ext4 /home/minio.img
mount -o loop /home/minio.img /data
3. Run df -h to confirm the mount:
[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs 3.8G 0 3.8G 0% /dev
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 3.9G 12M 3.8G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/mapper/centos-root 44G 1.5G 43G 4% /
/dev/sda1 1014M 193M 822M 20% /boot
tmpfs 781M 0 781M 0% /run/user/0
/dev/loop0 40G 49M 38G 1% /data
The /data directory now has 40 GB of space.
4. Mount at boot
Edit /etc/fstab and add the line:
/home/minio.img /data ext4 defaults 0 0
Save, close, and reboot, then check whether the mount comes back up.
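If you want to verify the new fstab entry without rebooting, something like the following should work (on some systems an image file needs an explicit loop option in the fstab line, so treat this as a sketch):
umount /data     # remove the manual mount first
mount -a         # mount everything listed in /etc/fstab
df -h /data      # the loop device should reappear at /data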
Use rm -rf /data/* to remove the lost+found directory that mkfs created.
Installing MinIO
1. Disable the firewall:
systemctl stop iptables
systemctl disable iptables
2. Raise the system's maximum number of open files:
echo '* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535' >> /etc/security/limits.conf
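These limits only take effect for new login sessions; after logging in again, a quick check looks like this:
ulimit -n   # should now print 65535 (open files)
ulimit -u   # should now print 65535 (max user processes)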
3. Synchronize the time on all four servers; I set them all to UTC+8.
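One possible way to do this on CentOS 7 is sketched below; Asia/Shanghai is simply the zone I would pick for UTC+8, and chronyd is assumed to be available (it usually is by default):
timedatectl set-timezone Asia/Shanghai
systemctl enable chronyd
systemctl start chronyd    # keep the clock in sync via NTP
timedatectl status         # confirm the timezone and NTP sync state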
4. Networking
Put the four servers on the same LAN. If they cannot talk to each other over the VPS provider's internal network, you can use ZeroTier to join them into a virtual LAN instead.
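If you do go the ZeroTier route, the rough sequence on each node looks like this; <NETWORK_ID> is a placeholder for the network ID from your own ZeroTier account, and each node still has to be authorized in the ZeroTier web console:
curl -s https://install.zerotier.com | sudo bash
sudo zerotier-cli join <NETWORK_ID>
sudo zerotier-cli listnetworks   # shows the assigned virtual IP once the node is authorized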
5. Trial run
The commands below are similar to what I actually ran (the same set of commands is run on each of the four servers); I've changed my IPs, directories, and keys, and for safety I haven't posted the exact commands I used. During this step I hit the error below, which cost me three hours before I figured out the pitfall by trial and error:
Unable to read 'format.json' from http://130.61.*.*:9000/data: Expected 'storage' API version 'v20', instead found 'v20', please upgrade the servers
cd /home
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
export MINIO_ACCESS_KEY=<ACCESS_KEY>
export MINIO_SECRET_KEY=<SECRET_KEY>
/home/minio server http://192.168.1.11/data \
http://192.168.1.12/data \
http://192.168.1.13/data \
http://192.168.1.14/data
The /data in the command is the /data directory created above, so each node is written as http://192.168.1.11/data and so on.
Open any one of the IPs, e.g. http://*.*.*.*:9000, to reach the web console; the installation is complete. The servers' public IPs work here as well.
Startup script
nano /home/minio.sh
#!/bin/bash
export MINIO_ACCESS_KEY=<ACCESS_KEY>
export MINIO_SECRET_KEY=<SECRET_KEY>
sleep 1m
/home/minio server http://192.168.1.11/data \
http://192.168.1.12/data \
http://192.168.1.13/data \
http://192.168.1.14/data
Start at boot
vim /usr/lib/systemd/system/minio.service
# vim /etc/systemd/system/minio.service   # unit file path on Ubuntu
---------------------------------------------------------------------------------------
[Unit]
Description=Minio service
Documentation=https://docs.minio.io/
[Service]
WorkingDirectory=/home/
ExecStart=/home/minio.sh
Restart=on-failure
RestartSec=5
# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
--------------------------------------------------------------------------------------
chmod +x /usr/lib/systemd/system/minio.service   # unit file on CentOS
# chmod +x /etc/systemd/system/minio.service     # unit file on Ubuntu
# minio.service reference config: https://github.com/minio/minio-service/blob/master/linux-systemd/minio.service
Manage the service with:
chmod +x /home/minio.sh
systemctl daemon-reload   # reload the systemd configuration
systemctl enable minio
systemctl start minio
systemctl status minio
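If the service fails to start, the systemd journal is the first place to look:
journalctl -u minio -f   # follow the MinIO service log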
After starting the service, open the web console and upload a file to test.
Everything above is the most basic configuration. To let a Synology NAS connect, the S3 endpoint has to use virtual-host-style addressing, where the hostname is formed from the bucket name plus the domain:
By default, MinIO supports path-style requests that are of the format
http://mydomain.com/bucket/object. MINIO_DOMAIN environment variable
is used to enable virtual-host-style requests. If the request Host
header matches with (.+).mydomain.com then the matched pattern $1 is
used as bucket and the path is used as object. More information on
path-style and virtual-host-style here
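Concretely, with MINIO_DOMAIN=mydomain.com and a bucket named ra3, the same object can be addressed in either style (example.png is just a placeholder object name):
path-style:          https://mydomain.com/ra3/example.png
virtual-host-style:  https://ra3.mydomain.com/example.png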
So the startup commands become:
export MINIO_ACCESS_KEY=<ACCESS_KEY>
export MINIO_SECRET_KEY=<SECRET_KEY>
export MINIO_DOMAIN=mydomain.com
sleep 1m
/home/minio server --address 0.0.0.0:443 /data
and the TLS certificates need to be stored in the following layout:
$ mc tree --files ~/.minio
/home/user1/.minio
└─ certs
├─ CAs
├─ private.key
└─ public.crt
Source: the official documentation
Finally, the service can be reached at https://mydomain.com, and Cyberduck or Synology Cloud Sync can connect over their default S3 settings, which gives much better compatibility.
FTP management
FTP
On this server, MinIO is mounted with rclone; files uploaded over FTP into the designated folder then get synced into object storage.
First install BBR, AppNode, an FTP server, and rclone on the machine.
Use mkdir to create the folder /data/ra3, point the FTP root at it, and test that the FTP connection works.
rclone
Install rclone with:
curl https://rclone.org/install.sh | sudo bash
Once installed, run
rclone config
to start the configuration:
[root@ftp ~]# rclone config
2020/10/06 06:23:33 NOTICE: Config file "/root/.config/rclone/rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> minio-ra3
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / 1Fichier
\ "fichier"
2 / Alias for an existing remote
\ "alias"
3 / Amazon Drive
\ "amazon cloud drive"
4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)
\ "s3"
5 / Backblaze B2
\ "b2"
6 / Box
\ "box"
7 / Cache a remote
\ "cache"
8 / Citrix Sharefile
\ "sharefile"
9 / Dropbox
\ "dropbox"
10 / Encrypt/Decrypt a remote
\ "crypt"
11 / FTP Connection
\ "ftp"
12 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
13 / Google Drive
\ "drive"
14 / Google Photos
\ "google photos"
15 / Hubic
\ "hubic"
16 / In memory object storage system.
\ "memory"
17 / Jottacloud
\ "jottacloud"
18 / Koofr
\ "koofr"
19 / Local Disk
\ "local"
20 / Mail.ru Cloud
\ "mailru"
21 / Mega
\ "mega"
22 / Microsoft Azure Blob Storage
\ "azureblob"
23 / Microsoft OneDrive
\ "onedrive"
24 / OpenDrive
\ "opendrive"
25 / OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
26 / Pcloud
\ "pcloud"
27 / Put.io
\ "putio"
28 / QingCloud Object Storage
\ "qingstor"
29 / SSH/SFTP Connection
\ "sftp"
30 / Sugarsync
\ "sugarsync"
31 / Tardigrade Decentralized Cloud Storage
\ "tardigrade"
32 / Transparently chunk/split large files
\ "chunker"
33 / Union merges the contents of several upstream fs
\ "union"
34 / Webdav
\ "webdav"
35 / Yandex Disk
\ "yandex"
36 / http Connection
\ "http"
37 / premiumize.me
\ "premiumizeme"
38 / seafile
\ "seafile"
Storage> 4
** See help for s3 backend at: https://rclone.org/s3/ **
Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
\ "AWS"
2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
\ "Alibaba"
3 / Ceph Object Storage
\ "Ceph"
4 / Digital Ocean Spaces
\ "DigitalOcean"
5 / Dreamhost DreamObjects
\ "Dreamhost"
6 / IBM COS S3
\ "IBMCOS"
7 / Minio Object Storage
\ "Minio"
8 / Netease Object Storage (NOS)
\ "Netease"
9 / Scaleway Object Storage
\ "Scaleway"
10 / StackPath Object Storage
\ "StackPath"
11 / Tencent Cloud Object Storage (COS)
\ "TencentCOS"
12 / Wasabi Object Storage
\ "Wasabi"
13 / Any other S3 compatible provider
\ "Other"
provider> 7
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> USWUXHGYZQYFYFFIT3RE
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03F
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Use this if unsure. Will use v4 signatures and an empty region.
\ ""
2 / Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
\ "other-v2-signature"
region> us-east-1
Endpoint for S3 API.
Required when using an S3 clone.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
endpoint> http://1.1.1.1:9000
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Enter a string value. Press Enter for the default ("").
location_constraint>
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "private"
2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
\ "public-read"
/ Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
3 | Granting this on a bucket is generally not recommended.
\ "public-read-write"
4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
\ "authenticated-read"
/ Object owner gets FULL_CONTROL. Bucket owner gets READ access.
5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-read"
/ Both the object owner and the bucket owner get FULL_CONTROL over the object.
6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-full-control"
acl> 1
The server-side encryption algorithm used when storing this object in S3.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / None
\ ""
2 / AES256
\ "AES256"
3 / aws:kms
\ "aws:kms"
server_side_encryption> 1
If using KMS ID you must provide the ARN of Key.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / None
\ ""
2 / arn:aws:kms:*
\ "arn:aws:kms:us-east-1:*"
sse_kms_key_id> 1
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
--------------------
[minio-ra3]
type = s3
provider = Minio
env_auth = false
access_key_id = USWUXHGYZQYFYFFIT3RE
secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03F
region = us-east-1
endpoint = http://1.1.1.1:9000
acl = private
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:
Name Type
==== ====
minio-ra3 s3
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
Then run:
[root@ftp ~]# rclone lsd minio-ra3:
-1 2020-10-06 06:01:15 -1 ra3
to confirm the connection works.
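To also check that writes go through, push a throwaway file into the bucket and list it back (test.txt is just a placeholder; the ra3 bucket must already exist):
echo "hello" > /tmp/test.txt
rclone copy /tmp/test.txt minio-ra3:ra3   # upload into the ra3 bucket
rclone ls minio-ra3:ra3                   # the file should now be listed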
Mounting with rclone
rclone mount minio-ra3:ra3 /data/ra3 --copy-links --allow-non-empty --daemon
Explanation:
minio-ra3 is the name of the rclone remote
ra3 is the MinIO bucket
/data/ra3 is the FTP directory created earlier
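As a quick end-to-end check of the mount, write a file through it and confirm it shows up in the bucket (mount-test.txt is just a placeholder name):
echo "mount test" > /data/ra3/mount-test.txt
rclone ls minio-ra3:ra3   # mount-test.txt should appear in the bucket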
Mount at boot
vi /etc/systemd/system/rclone-minio-ra3.service
[Unit]
Description=Rclone
After=network-online.target
[Service]
Type=simple
ExecStart=/usr/bin/rclone mount minio-ra3:ra3 /data/ra3 --copy-links --no-gzip-encoding --no-check-certificate --allow-other --allow-non-empty --umask 000
Restart=on-abort
User=root
[Install]
WantedBy=default.target
Related commands:
Enable at boot: systemctl enable rclone-minio-ra3.service
Start: systemctl start rclone-minio-ra3.service
Stop: systemctl stop rclone-minio-ra3.service
Disable: systemctl disable rclone-minio-ra3.service
Fronting with nginx
Since the service needs to be available to users in mainland China, I use an Oracle Cloud server in Korea as a reverse proxy.
First set the bucket to public.
Objects can then be reached through direct links such as http://1.1.1.1:9000/ra3/1,
so a reverse proxy in front of MinIO can be used to speed up access.
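The nginx side isn't shown in the post, so here is a minimal sketch of what the reverse-proxy server block might look like, assuming the MinIO node is reachable at 1.1.1.1:9000 as in the direct link above; files.mydomain.com is a placeholder, and TLS is left out:
server {
    listen 80;
    server_name files.mydomain.com;

    # MinIO objects can be large, so don't cap the request body size
    client_max_body_size 0;

    location / {
        proxy_pass http://1.1.1.1:9000;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}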
References:
Creating, mounting, and formatting a virtual disk on Linux
The mount command and /etc/fstab in detail
Setting up MinIO standalone / as a cluster
Tuning the maximum number of open files on Linux
Rclone with a MinIO server
rclone mount
Mounting Google Drive with rclone on Debian/Ubuntu
Installing Rclone on a Raspberry Pi