This article walks through deploying a Flask project with gunicorn in a multi-process, multi-threaded configuration, with nginx in front as a reverse proxy.
1. File structure

The final project layout looks like this:
```text
.
├── apps
│   ├── api_v1
│   │   ├── __init__.py
│   │   └── user.py
│   ├── config.py
│   ├── extensions.py
│   ├── forms
│   │   └── user.py
│   ├── __init__.py
│   ├── models
│   │   ├── __init__.py
│   │   └── user.py
│   └── utils
│       └── response.py
├── data
│   ├── migrations
│   ├── mysql
│   ├── nginx
│   │   └── nginx.conf
│   └── redis
├── docker-compose.yml
├── Dockerfile
├── requirements.txt
├── gunicorn.py
└── wsgi.py
```
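The tree lists `wsgi.py` as the gunicorn entry point, but the article does not show its contents. A minimal sketch, assuming the `apps` package exposes an application factory named `create_app` (both names are assumptions, not confirmed by the source):

```python
# wsgi.py — hypothetical minimal entry module.
# Assumes apps/__init__.py defines a create_app() application factory;
# gunicorn loads this module and serves the `app` object ("wsgi:app").
from apps import create_app

app = create_app()
```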
2. docker-compose.yml

An nginx service is added: it maps host port 8080 to container port 80 and mounts the nginx configuration file read-only.
```yaml
services:
  web:
    ...
    expose:
      - "5000"

  nginx:
    image: nginx:latest
    volumes:
      - ./data/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - web
    ports:
      - "8080:80"
    networks:
      - backend
```
3. data/nginx/nginx.conf

The nginx configuration forwards the root location to Flask on port 5000:
```nginx
user nginx;

events {
    worker_connections 5000;
}

http {
    server {
        listen 80;

        location / {
            proxy_pass http://web:5000;
        }
    }
}
```
4. requirements.txt

```text
gunicorn==21.2.0
gevent==23.9.1
```
5. Dockerfile

The Dockerfile now starts the project with gunicorn:
```dockerfile
ENTRYPOINT ["gunicorn", "--config", "gunicorn.py", "wsgi:app"]
```
6. gunicorn.py

The gunicorn configuration file is shown below; it sets the worker and thread counts and binds port 5000.
```python
import os

import gevent.monkey

# Patch the standard library so blocking I/O cooperates with gevent
gevent.monkey.patch_all()

logdir = './logs'
if not os.path.exists(logdir):
    os.mkdir(logdir)

loglevel = 'warning'
current_path = os.path.dirname(__file__)
errorlog = os.path.join(current_path, logdir, 'gunicorn-error.log')
accesslog = os.path.join(current_path, logdir, 'gunicorn-access.log')
# %(D)s adds the request duration in microseconds to each access-log line
access_log_format = '%(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s" "%(D)s"'

worker_class = 'gevent'
workers = os.cpu_count() * 2 + 1
# Note: the `threads` setting only affects the gthread worker type;
# it is ignored when worker_class is 'gevent'
threads = os.cpu_count() * 2 + 1

timeout = 60
bind = "0.0.0.0:5000"
```
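The `workers` value above follows the (2 × CPU cores + 1) rule of thumb cited in the gunicorn documentation. A small sketch of the same arithmetic (the helper name is mine, not part of the project):

```python
import os


def workers_for(cores: int) -> int:
    """Gunicorn rule of thumb: two workers per core, plus one."""
    return cores * 2 + 1


# On the current machine (falls back to 1 if the core count is unknown):
print(workers_for(os.cpu_count() or 1))
# e.g. a 4-core host gets workers_for(4) == 9
```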
7. Testing

Benchmarking after the gunicorn deployment shows roughly a two-fold improvement in concurrency. The first run below hits gunicorn directly on port 5000; the second goes through nginx on port 8080.
```shell
# Create a test user, then benchmark gunicorn directly on port 5000
curl -H "Content-Type: application/json" -X POST -d '{"username":"12345678", "password":"12345678", "mobile": "13200001111", "info":{"age": 18, "birth": "2023-08-23 12:30:59"}}' 127.0.0.1:5000/api/v1/users/

wrk -t10 -c1000 -d30s -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmcmVzaCI6ZmFsc2UsImlhdCI6MTY5NzYwOTc0OCwianRpIjoiZDYxYTNmOGMtODNiNy00YTRmLWI5MmYtNWUwMTdkNGY3MDQxIiwidHlwZSI6ImFjY2VzcyIsInN1YiI6MywibmJmIjoxNjk3NjA5NzQ4LCJleHAiOjE2OTc2MDk4Njh9.GG9fawbZEpkrTDmho9y5pc_ZprBpkz8UfuZwsW5lNbU" --latency "http://127.0.0.1:5000/api/v1/users/"

Running 30s test @ http://127.0.0.1:5000/api/v1/users/
  10 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   329.79ms  323.15ms   2.00s    89.33%
    Req/Sec    70.78     50.50    252.00     61.68%
  Latency Distribution
     50%  220.38ms
     75%  234.83ms
     90%    1.20s
     99%    1.41s
  17933 requests in 30.10s, 5.92MB read
  Socket errors: connect 0, read 0, write 0, timeout 439
Requests/sec:    595.80
Transfer/sec:    201.32KB

# Same test through nginx on port 8080
curl -H "Content-Type: application/json" -X POST -d '{"username":"12345678", "password":"12345678", "mobile": "13200001111", "info":{"age": 18, "birth": "2023-08-23 12:30:59"}}' 127.0.0.1:8080/api/v1/users/

wrk -t10 -c1000 -d30s -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJmcmVzaCI6ZmFsc2UsImlhdCI6MTY5NzYxMDAwNSwianRpIjoiZDA0ZDgwYWQtZjg0Zi00NzIwLWI0ZjUtY2E0NTRkZGMxOGFmIiwidHlwZSI6ImFjY2VzcyIsInN1YiI6NCwibmJmIjoxNjk3NjEwMDA1LCJleHAiOjE2OTc2MTAxMjV9.ZcjKQ6cd9JKAGBonT1FwEyaJCcZqdAP6yUPWCDj9ylc" --latency "http://127.0.0.1:8080/api/v1/users/"

Running 30s test @ http://127.0.0.1:8080/api/v1/users/
  10 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   225.38ms  320.71ms   1.99s    89.78%
    Req/Sec   112.12     74.17    700.00     76.77%
  Latency Distribution
     50%  117.37ms
     75%  120.03ms
     90%    1.11s
     99%    1.32s
  33331 requests in 30.06s, 9.06MB read
  Socket errors: connect 0, read 0, write 0, timeout 798
  Non-2xx or 3xx responses: 67
Requests/sec:   1108.68
Transfer/sec:    308.63KB
```
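As a quick sanity check on the "roughly two-fold" claim, the Requests/sec figures from the two wrk runs above can be compared directly (numbers copied from the output):

```python
# Requests/sec reported by the two wrk runs above
direct_rps = 595.80    # gunicorn hit directly on :5000
proxied_rps = 1108.68  # through nginx on :8080

ratio = proxied_rps / direct_rps
print(f"nginx-fronted throughput is {ratio:.2f}x the direct run")  # ≈ 1.86x
```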