Lab topology: a Ceph cluster managed from ceph-node1 (the ceph-deploy admin node), plus an OpenStack node, controller (192.168.1.110).

1. Configure OpenStack as a Ceph client
1) Install the client
# Add the openstack node to /etc/hosts; `ceph-deploy install` installs the ceph client packages, and `ceph-deploy admin` pushes the cluster config to the node
[root@ceph-node1 ~]# echo '192.168.1.110 controller' >> /etc/hosts
[root@ceph-node1 ceph]# ceph-deploy install controller
[root@ceph-node1 ceph]# ceph-deploy admin controller
2) Configure the backend storage pools

# Create one storage pool for each service: glance (images), nova (vms) and cinder (volumes)
[root@ceph-node1 ceph]# ceph osd pool create images 128
pool 'images' created
[root@ceph-node1 ceph]# ceph osd pool create vms 128
pool 'vms' created
[root@ceph-node1 ceph]# ceph osd pool create volumes 128
pool 'volumes' created
View the current pools:
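For example, from the admin node (the output should include the three pools created above):
[root@ceph-node1 ceph]# ceph osd lspools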

# Create the cephx user for glance. Note: to update an existing user's capabilities, use `ceph auth caps`
[root@ceph-node1 ceph]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
[client.glance]
key = AQAwxx1frBmLLBAAmcelOkoq55BQgb0u34gJfw==
# Copy the key to the openstack node
[root@ceph-node1 ceph]# ceph auth get-or-create client.glance|ssh controller tee /etc/ceph/ceph.client.glance.keyring
root@controller's password:
[client.glance]
key = AQBgxx1fMX9gGRAANQb0KICq1bExhGVQ6qYhkg==
# Fix the keyring's ownership on the node
[root@ceph-node1 ceph]# ssh controller chown glance:glance /etc/ceph/ceph.client.glance.keyring
root@controller's password:
2. Configure the Glance service
# Edit the glance-api.conf configuration as follows
[root@controller glance]# vim /etc/glance/glance-api.conf
[DEFAULT]
rpc_backend = rabbit
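# expose the image's backend location so Cinder/Nova can clone it copy-on-write from RBD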
show_image_direct_url = True
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8


# After the changes, restart the glance-api service
[root@controller glance]# openstack-service restart glance-api
Test:
Booting virtual machines from Ceph supports only raw-format images (RBD copy-on-write cloning requires raw), so convert the image to raw first:
[root@controller images]# qemu-img convert -p -f qcow2 -O raw cirros-0.3.4-x86_64-disk.img cirros.raw
(100.00/100%)
Upload the image with the glance image-create command:
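A possible invocation (the image name and flag choices here are an assumption, not taken from the original):
[root@controller images]# glance image-create --name cirros --disk-format raw --container-format bare --file cirros.raw --progress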

After the upload succeeds, inspect the images pool with rbd ls:
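For example:
[root@ceph-node1 ceph]# rbd ls images
# expected: a single RBD image named after the uploaded image's UUID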

The object in the pool has the same name as the image's ID, so the Glance backend storage configuration is complete.

3. Configure the Nova service
Note: it is advisable to set up the Cinder integration (section 4) first, since the configuration below references the client.cinder user and the libvirt secret UUID created there.
Nova compute can use RBD in two ways: one is attaching a Cinder volume to a virtual machine; the other is booting the virtual machine from a Cinder volume, in which case Nova creates an RBD image, imports the Glance image's content into it, and hands it over to libvirt.
Here we verify the first case, Nova using Ceph via attached volumes:
[root@controller ~]# vim /etc/nova/nova.conf
[libvirt]
virt_type = qemu
inject_key = True
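# cephx user and libvirt secret UUID created in the Cinder section (section 4) below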
rbd_user = cinder
rbd_secret_uuid = 674fea11-e69e-4c95-b378-2baa19cd6b4e
[root@controller ~]# systemctl restart openstack-nova-compute

Test:
Look up the network ID, then launch an instance from the cirros image with nova boot, named ceph-VM-test:
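A sketch of the commands (the m1.tiny flavor and the <net-id> placeholder are assumptions):
[root@controller ~]# neutron net-list
[root@controller ~]# nova boot --flavor m1.tiny --image cirros --nic net-id=<net-id> ceph-VM-test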

Look up the ID of the volume created earlier on ceph, then attach it to the ceph-VM-test instance with nova volume-attach:
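For example, with <volume-id> standing in for the ID reported by cinder list:
[root@controller ~]# cinder list
[root@controller ~]# nova volume-attach ceph-VM-test <volume-id>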

List the cinder volumes again; the volume has been attached successfully.
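For example:
[root@controller ~]# cinder list
# the volume's status should now read "in-use"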

On the Dashboard's instances page, the instance is running normally.

On the Dashboard's volumes page, the volume is attached as expected.

Networking works correctly, and an ssh connection to the instance succeeds:
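For example, using the instance's IP from nova list (cirros images ship with a default user named cirros):
[root@controller ~]# nova list
[root@controller ~]# ssh cirros@<instance-ip>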

4. Configure the Cinder service
# Create the cephx user for cinder
[root@ceph-node1 ceph]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
[client.cinder]
key = AQBl0R1fa0KcFBAA3GDkcSyiC+A50ufHyqc6qQ==
# Copy the keyring file to the openstack node
[root@ceph-node1 ceph]# ceph auth get-or-create client.cinder|ssh controller tee /etc/ceph/ceph.client.cinder.keyring
root@controller's password:
[client.cinder]
key = AQBl0R1fa0KcFBAA3GDkcSyiC+A50ufHyqc6qQ==
# Fix the keyring file's ownership
[root@ceph-node1 ceph]# ssh controller chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
root@controller's password:
On the OpenStack node, generate a UUID, write a secret.xml definition file, and register the key with libvirt.
# Generate a UUID on the openstack node
[root@controller ~]# uuidgen
674fea11-e69e-4c95-b378-2baa19cd6b4e
# Write the secret.xml definition file, pasting in the UUID
[root@controller ~]# vim secret.xml
<secret ephemeral='no' private='no'>
  <uuid>674fea11-e69e-4c95-b378-2baa19cd6b4e</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>

# Define the secret in libvirt
[root@controller ~]# virsh secret-define --file secret.xml
Secret 674fea11-e69e-4c95-b378-2baa19cd6b4e created
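The next command reads the key from ./client.cinder.key, a file not created in the steps above; assuming the usual approach from the Ceph documentation, it can be produced with:
[root@ceph-node1 ceph]# ceph auth get-key client.cinder | ssh controller tee client.cinder.key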
# Set the secret's value to the base64-encoded ceph key
[root@controller ~]# virsh secret-set-value --secret 674fea11-e69e-4c95-b378-2baa19cd6b4e --base64 $(cat ./client.cinder.key)
Secret value set
Check the secrets now defined in libvirt:
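For example:
[root@controller ~]# virsh secret-list
# expected: one entry with UUID 674fea11-e69e-4c95-b378-2baa19cd6b4e and usage "ceph client.cinder secret"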

[root@controller ceph]# vim /etc/cinder/cinder.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.1.110
enabled_backends = ceph
glance_api_servers = http://controller:9292
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 674fea11-e69e-4c95-b378-2baa19cd6b4e


[root@controller ceph]# systemctl restart openstack-cinder-volume.service
Test:
Create a volume named ceph-disk with cinder:
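A possible invocation (the 1 GB size is an assumption):
[root@controller ~]# cinder create --name ceph-disk 1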

List the cinder volumes; the newly created volume's status is available.
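For example:
[root@controller ~]# cinder list
# the ceph-disk volume should appear with status "available"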

The dashboard also shows the volume as available.

Finally, check the volumes pool on the ceph side: it contains an RBD image whose name matches the ceph-disk volume's ID. The Cinder integration with Ceph is complete.
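For example:
[root@ceph-node1 ceph]# rbd ls volumes
# expected: volume-<ID of the ceph-disk volume>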
