File transfer with FastDFS is convenient: the default configuration handles both large and small files. Here is a client configuration file for reference:
# connect timeout in seconds
# default value is 30s
connect_timeout=300
# network timeout in seconds
# default value is 30s
network_timeout=300
# the base path to store log files
base_path=/fastdfs/tracker
# tracker_server can occur more than once, and tracker_server format is
# "host:port", host can be hostname or ip address
#tracker_server=10.20.10.191:22122
tracker_server=10.20.1.50:22122
#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info
# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false
# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600
# if load FastDFS parameters from tracker server
# since V4.05
# default value is false
load_fdfs_parameters_from_tracker=false
# if use storage ID instead of IP address
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# default value is false
# since V4.05
use_storage_id = false
# specify storage ids filename, can use relative or absolute path
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V4.05
storage_ids_filename = storage_ids.conf
#HTTP settings
http.tracker_server_port=80
#use "#include" directive to include other HTTP settings
##include http.conf
When using the Python client, I have found two bugs so far.
First locate the Python interpreter you are actually using (which python); whereis python lists all installed Python interpreters.
1. First bug:
site-packages/fdfs_client/connection.py, line 104.
When the connection pool re-initializes a connection, the call is clearly missing one parameter. In a single-process program the original code does not fail, because the child-process PID comparison never happens, so the destroy-and-reinitialize path is never triggered.
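I have not reproduced the library source here, but the pattern behind this bug can be sketched as follows. This is a minimal, hypothetical re-implementation (the class and parameter names are my own, not fdfs_client's): the pool records the PID that created each connection, and when a forked child detects a PID mismatch it must rebuild the connection with all of the original constructor arguments; dropping one of them is exactly the kind of bug described above.

```python
import os

class Connection:
    # Hypothetical stand-in for the library's connection object.
    def __init__(self, host, port, timeout):
        self.host, self.port, self.timeout = host, port, timeout
        self.pid = os.getpid()  # remember which process created us

class Pool:
    def __init__(self, host, port, timeout):
        self._args = (host, port, timeout)
        self._conn = Connection(*self._args)

    def get_connection(self):
        # After a fork, the child sees a different PID and must rebuild
        # the connection. The bug class described above is re-initializing
        # here with an incomplete argument list, e.g.
        # Connection(self._args[0], self._args[1])  ->  TypeError
        if self._conn.pid != os.getpid():
            self._conn = Connection(*self._args)  # pass ALL original args
        return self._conn

pool = Pool('10.20.1.50', 22122, 30)
conn = pool.get_connection()
```

In a single process the PID never changes, so the faulty re-init branch is dead code, which matches the observation that only multi-process use hits the bug.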
2. Second bug:
site-packages/fdfs_client/client.py, line 324.
When deleting a FastDFS file, if the remote_file_id argument is of type str, the call tmp = split_remote_fileid(remote_file_id) fails inside utils.py at line 222:
index = remote_file_id.find(b'/')
If remote_file_id is a str, this obviously raises an error, since str.find does not accept a bytes argument on Python 3. This may be a Python interpreter version issue; I have not verified it further.
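A defensive workaround (my own sketch, not the library's actual code) is to normalize the id to bytes before the find call. On Python 3, 'abc'.find(b'/') raises TypeError because str and bytes do not mix, which is consistent with the error above:

```python
def split_remote_fileid(remote_file_id):
    """Split 'group_name/remote_filename' into its two parts.

    Sketch of a str/bytes-safe version: the client works with bytes
    internally, so a str argument is encoded first instead of being
    passed straight to bytes-based find().
    """
    if isinstance(remote_file_id, str):
        remote_file_id = remote_file_id.encode('utf-8')
    index = remote_file_id.find(b'/')
    if index == -1:
        return None
    return remote_file_id[:index], remote_file_id[index + 1:]

# Both call styles now work instead of the str one raising TypeError:
split_remote_fileid('group1/M00/00/00/abc.jpg')
split_remote_fileid(b'group1/M00/00/00/abc.jpg')
```

The same normalization could be applied once at the top of the delete call instead, so every helper downstream sees bytes.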