In 2016, Google's team released the BBR congestion control algorithm, which tries to drive the link as close to full bandwidth as possible.
The Linux community merged it quickly: BBR has shipped with the kernel since 4.9, but it is off by default. Something this good, why not turn it on?
Enabling BBR
First, run uname -r to confirm that your kernel is 4.9 or newer.
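For example, on a Debian 9 box the check might look like this (the exact version string depends on your distribution and build):
$ uname -r
4.9.0-8-amd64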
Write the two parameters into /etc/sysctl.conf:
echo "net.core.default_qdisc=fq" >> /etc/sysctl.conf
echo "net.ipv4.tcp_congestion_control=bbr" >> /etc/sysctl.conf
Apply them to the running system:
sysctl -p
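sysctl -p echoes each setting it applies, so if everything went in cleanly you should see both lines back (a sketch of the expected output; other entries from your sysctl.conf may appear as well):
$ sysctl -p
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr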
Check whether BBR is available and enabled:
sysctl net.ipv4.tcp_available_congestion_control
sysctl net.ipv4.tcp_congestion_control
If bbr shows up in both results, BBR is enabled in your kernel.
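On a 4.9 kernel configured as above, the two commands typically print something like this (the list of available algorithms varies with the kernel build):
$ sysctl net.ipv4.tcp_available_congestion_control
net.ipv4.tcp_available_congestion_control = cubic reno bbr
$ sysctl net.ipv4.tcp_congestion_control
net.ipv4.tcp_congestion_control = bbr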
Run lsmod | grep bbr; if the tcp_bbr module is listed, BBR is running.
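Something like the following is what you are looking for (module size and use count will of course differ on your machine):
$ lsmod | grep bbr
tcp_bbr                20480  12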
Going further
Of course, if you want to tune the network further, edit /etc/sysctl.conf:
# max open files
fs.file-max = 51200
# max read buffer
net.core.rmem_max = 67108864
# max write buffer
net.core.wmem_max = 67108864
# default read buffer
net.core.rmem_default = 65536
# default write buffer
net.core.wmem_default = 65536
# max processor input queue
net.core.netdev_max_backlog = 4096
# max backlog
net.core.somaxconn = 4096
# resist SYN flood attacks
net.ipv4.tcp_syncookies = 1
# reuse timewait sockets when safe
net.ipv4.tcp_tw_reuse = 1
# turn off fast timewait sockets recycling
# (note: tcp_tw_recycle was removed in Linux 4.12, so drop this line on newer kernels)
net.ipv4.tcp_tw_recycle = 0
# short FIN timeout
net.ipv4.tcp_fin_timeout = 30
# short keepalive time
net.ipv4.tcp_keepalive_time = 1200
# outbound port range
net.ipv4.ip_local_port_range = 10000 65000
# max SYN backlog
net.ipv4.tcp_max_syn_backlog = 4096
# max timewait sockets held by system simultaneously
net.ipv4.tcp_max_tw_buckets = 5000
# turn on TCP Fast Open on both client and server side
net.ipv4.tcp_fastopen = 3
# TCP receive buffer
net.ipv4.tcp_rmem = 4096 87380 67108864
# TCP write buffer
net.ipv4.tcp_wmem = 4096 65536 67108864
# turn on path MTU discovery
net.ipv4.tcp_mtu_probing = 1
# use bbr
net.ipv4.tcp_congestion_control = bbr
# for high-latency network
#net.ipv4.tcp_congestion_control = hybla
# for low-latency network, use cubic instead
#net.ipv4.tcp_congestion_control = cubic
net.core.default_qdisc = fq
Then apply it:
sysctl -p
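One caveat: net.core.default_qdisc only affects queues created after the change, so an interface that is already up keeps its old qdisc. To switch it immediately and verify, you can set fq by hand (eth0 here is a placeholder for your actual interface name):
tc qdisc replace dev eth0 root fq
tc qdisc show dev eth0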
If you also want to raise the open-file limit, edit /etc/security/limits.conf and add:
* soft nofile 512000
* hard nofile 1024000
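Note that these are per-process limits, while fs.file-max (51200 in the sysctl block above) is the system-wide ceiling; if you raise per-process limits this far, you may want to raise fs.file-max to match. You can check the current system-wide value with:
cat /proc/sys/fs/file-max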
Then edit /etc/profile and add:
ulimit -SHn 1024000
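After logging in again (for login shells that source /etc/profile), a quick sanity check should report the new ceiling; expected output assuming the settings above:
$ ulimit -n
1024000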