Installing Docker and docker-compose on Windows

Pre-installation setup

Open the Control Panel, go to Programs and Features, then click "Turn Windows features on or off". Hyper-V and Containers must be enabled:

Check Hyper-V and Containers, then confirm.

Configuring the installation path

Docker's default installation path on Windows is C:\Program Files\Docker.

Run CMD as administrator.

Create a Docker folder under the dev folder on drive D.

Copy the following command:

mklink /j "C:\Program Files\Docker" "D:\dev\Docker"

Install Docker Desktop

Restart the computer after the installation succeeds.

Configuring registry mirrors (mainland China)

To customize the registry mirrors, open the Settings page and click the Docker Engine tab.

Replace the entire contents with:

{
    "registry-mirrors": [
        "https://docker.m.daocloud.io",
        "https://dockerproxy.com",
        "https://docker.mirrors.ustc.edu.cn",
        "https://docker.nju.edu.cn",
        "https://zfrla42w.mirror.aliyuncs.com",
        "https://ds56c2e4.mirror.aliyuncs.com",
        "https://registry.docker-cn.com",
        "http://hub-mirror.c.163.com",
        "https://mirror.baidubce.com",
        "https://cr.console.aliyun.com",
        "https://mirror.ccs.tencentyun.com"
    ],
    "builder": {
        "gc": {
            "defaultKeepStorage": "20GB",
            "enabled": true
        }
    },
    "experimental": false
}
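Docker Engine will refuse to start if daemon.json is malformed (curly quotes and trailing commas are common paste errors). A quick way to sanity-check the JSON before saving, sketched in Python:

```python
import json

# Candidate daemon.json contents (shortened here; paste your full config)
config_text = '''
{
    "registry-mirrors": [
        "https://docker.m.daocloud.io",
        "https://dockerproxy.com"
    ],
    "experimental": false
}
'''

config = json.loads(config_text)  # raises json.JSONDecodeError if malformed
print("valid JSON with", len(config["registry-mirrors"]), "mirrors")
```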

Creating the docker-compose.yml file

Pick the directory where you want to keep things and create a file named docker-compose.yml there.

Before starting the containers, create the host directories referenced under volumes in the docker-compose.yml contents yourself.
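The host-side directories can be pre-created in one go; a sketch in Python (the path list is an assumption matching the volumes used in the compose file below, adjust the base path to your own drive):

```python
import os

# Host-side bind-mount directories used by the compose file;
# adjust BASE to your own drive/layout.
BASE = "E:/docker"
SUBDIRS = [
    "postgresql/data",
    "sqlserver/data",
    "mysql/data", "mysql/conf",
    "redis/conf", "redis/data",
    "rabbitmq/conf", "rabbitmq/data",
    "onlyoffice/data", "onlyoffice/logs", "onlyoffice/lib",
    "elasticsearch/data", "elasticsearch/plugins", "elasticsearch/logs",
    "minio/data", "minio/config",
]

for sub in SUBDIRS:
    path = os.path.join(BASE, sub)
    os.makedirs(path, exist_ok=True)  # no error if it already exists
    print("created", path)
```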

Contents of docker-compose.yml:

version: '3'
services:
  postgres:
    image: postgres:9.2.9
    container_name: postgres
    restart: always
    environment:
      POSTGRES_USER: your-username
      POSTGRES_PASSWORD: your-password
    ports:
      - "5432:5432"
    volumes:
      - E:/docker/postgresql/data:/var/lib/postgresql/data

  sqlserver:
    image: mcr.microsoft.com/mssql/server:2019-latest
    container_name: sql-server2019
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_COLLATION: "Chinese_PRC_CI_AS"
      SA_PASSWORD: "your-password"
      TZ: "Asia/Shanghai"
    ports:
      - "1433:1433"
    volumes:
      - E:/docker/sqlserver/data:/var/opt/mssql

  mysql:
    image: mysql:8.0.33
    container_name: mysql
    environment:
      # Shanghai time zone
      TZ: Asia/Shanghai
      # root password
      MYSQL_ROOT_PASSWORD: your-password
      # initial database (subsequent init SQL runs against this database)
      MYSQL_DATABASE: your-database
    ports:
      - "3306:3306"
    volumes:
      # data mount
      - E:/docker/mysql/data/:/var/lib/mysql/
      # config mount
      - E:/docker/mysql/conf/:/etc/mysql/conf.d/
    command:
      # revert the MySQL 8.0 default authentication plugin to the legacy one
      # (the 8.0 default breaks password authentication for older clients)
      --default-authentication-plugin=mysql_native_password
      --character-set-server=utf8mb4
      --collation-server=utf8mb4_general_ci
      --explicit_defaults_for_timestamp=true
      --lower_case_table_names=1
    privileged: true

  redis:
    image: redis:6.2.12
    container_name: redis
    ports:
      - "6379:6379"
    environment:
      # Shanghai time zone
      TZ: Asia/Shanghai
    volumes:
      # config file
      - E:/docker/redis/conf:/redis/config:rw
      # data files
      - E:/docker/redis/data/:/redis/data/:rw
    command: "redis-server /redis/config/redis.conf"
    privileged: true
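The command above expects a redis.conf in the mounted config directory (E:/docker/redis/conf on the host); without it the container exits immediately. A minimal sketch of that file (the values are assumptions, adjust as needed):

```conf
# listen on all interfaces inside the container
bind 0.0.0.0
port 6379
# persist data under the mounted data directory
dir /redis/data
appendonly yes
# uncomment to require a password
# requirepass your-password
```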

  rabbitmq:
    image: rabbitmq:management
    container_name: rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      # Shanghai time zone
      TZ: Asia/Shanghai
      # default username and password
      RABBITMQ_DEFAULT_USER: rdsplm
      RABBITMQ_DEFAULT_PASS: RDS2024
    volumes:
      # config files
      - E:/docker/rabbitmq/conf:/etc/rabbitmq
      # data files
      - E:/docker/rabbitmq/data:/var/lib/rabbitmq

  onlyoffice:
    image: onlyoffice/documentserver
    container_name: onlyoffice
    ports:
      - "80:80"
    environment:
      TZ: Asia/Shanghai
    volumes:
      - E:/docker/onlyoffice/data:/var/www/onlyoffice/Data
      - E:/docker/onlyoffice/logs:/var/log/onlyoffice
      - E:/docker/onlyoffice/lib:/var/lib/onlyoffice
    privileged: true

  kibana:
    image: docker.elastic.co/kibana/kibana:7.12.0
    container_name: kibana
    ports:
      - "5601:5601"
    environment:
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
    depends_on:
      - elasticsearch

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.0
    container_name: elasticsearch
    environment:
      # cluster name, so nodes can discover each other
      cluster.name: elasticsearch
      # single-node discovery mode
      discovery.type: single-node
      ES_JAVA_OPTS: "-Xms64m -Xmx251m"
    # map ports 9200/9300 to the host
    ports:
      - "9200:9200"
      - "9300:9300"
    # mount the elasticsearch data directories
    volumes:
      - E:/docker/elasticsearch/data:/usr/share/elasticsearch/data
      - E:/docker/elasticsearch/plugins:/usr/share/elasticsearch/plugins
      - E:/docker/elasticsearch/logs:/usr/share/elasticsearch/logs

  minio:
    image: minio/minio:RELEASE.2023-04-13T03-08-07Z
    container_name: minio
    ports:
      # API port
      - "9000:9000"
      # console port
      - "9001:9001"
    environment:
      # Shanghai time zone
      TZ: Asia/Shanghai
      # admin console username
      MINIO_ROOT_USER: root
      # admin console password, at least 8 characters
      MINIO_ROOT_PASSWORD: RDS2024
      # a domain must be set when serving over https
      #MINIO_SERVER_URL: "https://xxx.com:9000"
      #MINIO_BROWSER_REDIRECT_URL: "https://xxx.com:9001"
      # compression: on to enable, off to disable
      MINIO_COMPRESS: "off"
      # extensions such as .pdf,.doc; empty compresses all types
      MINIO_COMPRESS_EXTENSIONS: ""
      # MIME types such as application/pdf; empty compresses all types
      MINIO_COMPRESS_MIME_TYPES: ""
    volumes:
      # map the host data directory to /data inside the container
      - E:/docker/minio/data:/data
      # map the config directory
      - E:/docker/minio/config:/root/.minio/
    # serve /data, API on :9000, console on :9001
    command: server --address ':9000' --console-address ':9001' /data
    privileged: true

Startup commands

Open a CMD prompt in the directory containing docker-compose.yml and run:

# start all services
docker-compose up -d

# start specific services
docker-compose up -d mysql redis

# stop and remove all services
docker-compose down

Starting MinIO

Startup command: docker-compose up -d minio

Docker Desktop will show the running containers.

Enter the corresponding URL in a browser to access each service.
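To confirm a service is actually reachable before opening its URL, you can poll the mapped port; a small sketch (the host and port values are assumptions matching the compose file above):

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0):
    """Poll until a TCP connection to host:port succeeds, or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(0.5)
    return False

# e.g. wait for MinIO's console (port 9001 in the compose file above):
# print(wait_for_port("localhost", 9001))
```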

Common problems

① rabbitmq

Visiting http://localhost:15672/ gives no response.

# open a shell in the container (container_name is rabbitmq in the compose file above)
docker exec -it rabbitmq bash

# list the enabled plugins
rabbitmq-plugins list

# enable the management console plugin
rabbitmq-plugins enable rabbitmq_management

# check the plugin list again
rabbitmq-plugins list

# exit when done
exit
