
Ceph Distributed Storage Installation and Deployment

2019-12-27    

I. Environment Preparation

1. Architecture

Official documentation: http://docs.ceph.org.cn/start/quick-start-preflight/

(Architecture diagram omitted; see the quick-start preflight page linked above.)

 

2. Create ceph.repo

[root@admin-node yum.repos.d]# cat ceph.repo

[ceph]

name=ceph

baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/

gpgcheck=0

priority=1

[ceph-noarch]

name=cephnoarch

baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/

gpgcheck=0

priority=1

[root@admin-node yum.repos.d]#

3. Install the ceph-deploy tool on the admin node

yum install ceph-deploy

4. Configure NTP time synchronization

yum install chrony -y && systemctl start chronyd && systemctl status chronyd && systemctl enable chronyd && egrep -v "#|^$" /etc/chrony.conf

[root@osd-node2 ~]# egrep -v "#|^$" /etc/chrony.conf

server 10.100.50.120 iburst

driftfile /var/lib/chrony/drift

makestep 1.0 3

rtcsync

logdir /var/log/chrony
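To verify that a node is actually synchronizing against 10.100.50.120, chrony's client can be queried; the output will differ per node, but both commands ship with the chrony package:

chronyc sources
chronyc tracking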

5. Create a deployment user on each node

We recommend creating a dedicated user for ceph-deploy on every Ceph node in the cluster, but do not name it "ceph". Using the same username across the whole cluster simplifies operation (it is not required), but you should avoid well-known usernames, because attackers brute-force them (e.g. root, admin, {productname}). The following steps describe how to create a user with passwordless sudo; replace {username} with a name of your own choosing.

useradd -d /home/{username} -m {username}

passwd {username}
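The passwordless sudo part mentioned above is not shown in these commands; a minimal sketch using the standard sudoers.d mechanism (replace {username} as before):

echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
sudo chmod 0440 /etc/sudoers.d/{username}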

6. Configure passwordless SSH login

1) Generate a key pair. Do not do this with sudo or as the root user.

[sysadmin@admin-node ~]$ ssh-keygen -t rsa    (the -t option selects the key type)

Generating public/private rsa key pair.

Enter file in which to save the key (/home/sysadmin/.ssh/id_rsa):    (press Enter)

Created directory '/home/sysadmin/.ssh'.

Enter passphrase (empty for no passphrase):    (press Enter)

Enter same passphrase again:    (press Enter)

Your identification has been saved in /home/sysadmin/.ssh/id_rsa.    (private key path)

Your public key has been saved in /home/sysadmin/.ssh/id_rsa.pub.    (public key path)

The key fingerprint is:

SHA256:2jiM64WbNMHu3n6wrHIz3dED8ezKDcFCRnZ4erps8uo sysadmin@admin-node

The key's randomart image is:

+---[RSA 2048]----+

| .o.. |

| .+.o |

| o + + |

| . o = o |

| o +S= |

| . =o+o + |

| *+**.= . |

| .o=BB.= . |

| *EX+. |

+----[SHA256]-----+

[sysadmin@admin-node ~]$

2) Copy the public key to the other nodes

ssh-copy-id sysadmin@10.100.50.128

ssh-copy-id sysadmin@10.100.50.129

ssh-copy-id sysadmin@10.100.50.130

Test the login:

[sysadmin@admin-node ~]$ ssh 10.100.50.130

Last login: Fri May 4 14:25:14 2018 from 10.100.50.127

[sysadmin@osd-node2 ~]$

7. Configure name resolution in /etc/hosts

[root@admin-node ~]# cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

10.100.50.127 admin-node

10.100.50.128 mon-node1

10.100.50.129 osd-node1

10.100.50.130 osd-node2
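The same host entries must exist on every node, not just the admin node. One way to push the file (a sketch only; it assumes the sysadmin user and the passwordless sudo configured in the earlier steps):

for h in mon-node1 osd-node1 osd-node2; do
  scp /etc/hosts sysadmin@$h:/tmp/hosts
  ssh sysadmin@$h 'sudo cp /tmp/hosts /etc/hosts'
done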

8. Edit ~/.ssh/config on the admin node to simplify SSH logins

[sysadmin@admin-node ~]$ cat ~/.ssh/config

Host mon-node1

Hostname mon-node1

User sysadmin

Host osd-node1

Hostname osd-node1

User sysadmin

Host osd-node2

Hostname osd-node2

User sysadmin

Login test:

[sysadmin@admin-node ~]$ ssh osd-node2

Last login: Fri May 4 16:55:54 2018 from admin-node

[sysadmin@osd-node2 ~]$

II. Installation and Deployment

1. Create a directory to hold the configuration files and key pairs

/home/sysadmin/my-cluster
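A minimal sketch of creating that directory as the deployment user (ceph-deploy writes its files into whatever directory it is invoked from):

mkdir -p /home/sysadmin/my-cluster
cd /home/sysadmin/my-cluster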

2. Disable requiretty

Change Defaults !visiblepw to Defaults visiblepw.

The error message looks like this:

[ceph_deploy.cli][INFO ] gpg_url : None

[ceph_deploy.install][DEBUG ] Installing stable version jewel on cluster ceph hosts mon-node1

[ceph_deploy.install][DEBUG ] Detecting platform for host mon-node1 ...

[mon-node1][DEBUG ] connection detected need for sudo

We trust you have received the usual lecture from the local System

Administrator. It usually boils down to these three things:

#1) Respect the privacy of others.

#2) Think before you type.

#3) With great power comes great responsibility.

sudo: no tty present and no askpass program specified

[ceph_deploy][ERROR ] RuntimeError: connecting to host: mon-node1 resulted in errors: IOError cannot send (already closed?)

[sysadmin@admin-node my-cluster]$

On some distributions (such as CentOS), ceph-deploy will fail if requiretty is enabled by default on your Ceph nodes. Disable it as follows:

Run sudo visudo, find the Defaults requiretty option, and change it to Defaults:ceph !requiretty, so that ceph-deploy can log in as the ceph user and use sudo.
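Put together, the relevant sudoers lines after both edits would look roughly like this (an illustrative sketch; always edit through sudo visudo):

Defaults    visiblepw
Defaults:ceph    !requiretty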

3. How to remove Ceph

If you run into trouble at some point and want to start over, the following commands clear the configuration:

ceph-deploy purgedata {ceph-node} [{ceph-node}]

ceph-deploy forgetkeys

The command below removes the Ceph packages as well:

ceph-deploy purge {ceph-node} [{ceph-node}]

If you ran purge, you must reinstall Ceph afterwards.

4. Create the cluster

1) Install Ceph on all nodes

If the error shown is: [ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph-source'

fix the EPEL repository, run yum makecache, or delete the stale repo files in /etc/yum.repos.d/, for example: rm -rf ceph.repo.rpmnew ceph.repo.rpmsave epel.repo.rpmnew

wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

ceph-deploy install osd-node1

If the error below appears, delete ceph.repo.rpmnew, epel-testing.repo, ceph.repo.rpmsave and epel.repo.rpmnew from /etc/yum.repos.d.

[osd-node1][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority

[ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph-source'

2) Run the command below to create the cluster with its monitor node(s); for now only one host serves as monitor:

ceph-deploy new mon-node1
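After ceph-deploy new finishes, the working directory should contain an initial ceph.conf, a monitor keyring, and the deployment log, roughly as follows (consistent with the listing shown further down):

[sysadmin@admin-node my-cluster]$ ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring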

If you hit the error: sudo: no tty present and no askpass program specified

[ceph_deploy][ERROR ] RuntimeError: connecting to host: mon-node1 resulted in errors: IOError cannot send (already closed?)

run sudo visudo and add the following line:

sysadmin ALL=(ALL) NOPASSWD: ALL

3) Configure the monitor(s) and gather the keys. This must be run from the my-cluster directory:

[sysadmin@admin-node my-cluster]$ ceph-deploy mon create-initial

..........................

.........................

[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring

[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring

[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mgr.keyring

[ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists

[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring

[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring

[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmpeANrgr

[sysadmin@admin-node my-cluster]$ ls

ceph.bootstrap-mds.keyring ceph.bootstrap-osd.keyring ceph.client.admin.keyring ceph-deploy-ceph.log

ceph.bootstrap-mgr.keyring ceph.bootstrap-rgw.keyring ceph.conf ceph.mon.keyring

[sysadmin@admin-node my-cluster]$

4) If something goes wrong during deployment and you redeploy, the following error may appear:

[mon-node1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.mon-node1.asok mon_status

[mon-node1][ERROR ] no valid command found; 10 closest matches:

[mon-node1][ERROR ] config set <var> <val> [<val>...]

[mon-node1][ERROR ] version

[mon-node1][ERROR ] git_version

[mon-node1][ERROR ] help

[mon-node1][ERROR ] config show

[mon-node1][ERROR ] get_command_descriptions

[mon-node1][ERROR ] config get <var>

[mon-node1][ERROR ] perfcounters_dump

[mon-node1][ERROR ] 2

[mon-node1][ERROR ] config diff

[mon-node1][ERROR ] admin_socket: invalid command

[ceph_deploy.mon][WARNIN] mon.mon-node1 monitor is not yet in quorum, tries left: 2

[ceph_deploy.mon][WARNIN] waiting 15 seconds before retrying

[mon-node1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.mon-node1.asok mon_status

[mon-node1][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory

[ceph_deploy.mon][WARNIN] mon.mon-node1 monitor is not yet in quorum, tries left: 1

[ceph_deploy.mon][WARNIN] waiting 20 seconds before retrying

[ceph_deploy.mon][ERROR ] Some monitors have still not reached quorum:

[ceph_deploy.mon][ERROR ] mon-node1

Solution:

(1) Remove the packages

[sysadmin@admin-node my-cluster]$ ceph-deploy purge mon-node1

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/sysadmin/.cephdeploy.conf

[ceph_deploy.cli][INFO ] Invoked (1.5.39): /bin/ceph-deploy purge mon-node1

[ceph_deploy.cli][INFO ] ceph-deploy options:

[ceph_deploy.cli][INFO ] username : None

[ceph_deploy.cli][INFO ] verbose : False

[ceph_deploy.cli][INFO ] overwrite_conf : False

[ceph_deploy.cli][INFO ] quiet : False

[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f31db696ef0>

[ceph_deploy.cli][INFO ] cluster : ceph

[ceph_deploy.cli][INFO ] host : ['mon-node1']

[ceph_deploy.cli][INFO ] func : <function purge at 0x7f31dbf89500>

[ceph_deploy.cli][INFO ] ceph_conf : None

[ceph_deploy.cli][INFO ] default_release : False

[ceph_deploy.install][INFO ] note that some dependencies *will not* be removed because they can cause issues with qemu-kvm

[ceph_deploy.install][INFO ] like: librbd1 and librados2

[ceph_deploy.install][DEBUG ] Purging on cluster ceph hosts mon-node1

[ceph_deploy.install][DEBUG ] Detecting platform for host mon-node1 ...

[mon-node1][DEBUG ] connection detected need for sudo

[mon-node1][DEBUG ] connected to host: mon-node1

[mon-node1][DEBUG ] detect platform information from remote host

[mon-node1][DEBUG ] detect machine type

[ceph_deploy.install][INFO ] Distro info: CentOS linux 7.5.1804 Core

[mon-node1][INFO ] Purging Ceph on mon-node1

[mon-node1][INFO ] Running command: sudo yum -y -q remove ceph ceph-release ceph-common ceph-radosgw

[mon-node1][DEBUG ] warning: /etc/yum.repos.d/ceph.repo saved as /etc/yum.repos.d/ceph.repo.rpmsave

[mon-node1][INFO ] Running command: sudo yum clean all

[mon-node1][DEBUG ] Loaded plugins: fastestmirror, langpacks, priorities

[mon-node1][DEBUG ] Cleaning repos: base epel extras updates

[mon-node1][DEBUG ] Cleaning up everything

[mon-node1][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos

[mon-node1][DEBUG ] Cleaning up list of fastest mirrors

[sysadmin@admin-node my-cluster]$

(2) Delete the keys

[sysadmin@admin-node my-cluster]$ ls

ceph.conf ceph-deploy-ceph.log ceph.mon.keyring

[sysadmin@admin-node my-cluster]$ ceph-deploy forgetkeys

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/sysadmin/.cephdeploy.conf

[ceph_deploy.cli][INFO ] Invoked (1.5.39): /bin/ceph-deploy forgetkeys

[ceph_deploy.cli][INFO ] ceph-deploy options:

[ceph_deploy.cli][INFO ] username : None

[ceph_deploy.cli][INFO ] verbose : False

[ceph_deploy.cli][INFO ] overwrite_conf : False

[ceph_deploy.cli][INFO ] quiet : False

[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fe4eaed35a8>

[ceph_deploy.cli][INFO ] cluster : ceph

[ceph_deploy.cli][INFO ] func : <function forgetkeys at 0x7fe4eb7c2e60>

[ceph_deploy.cli][INFO ] ceph_conf : None

[ceph_deploy.cli][INFO ] default_release : False

[sysadmin@admin-node my-cluster]$ ls

ceph.conf ceph-deploy-ceph.log

(3) Delete the deployed data files

[sysadmin@admin-node my-cluster]$ ceph-deploy purgedata mon-node1

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/sysadmin/.cephdeploy.conf

[ceph_deploy.cli][INFO ] Invoked (1.5.39): /bin/ceph-deploy purgedata mon-node1

[ceph_deploy.cli][INFO ] ceph-deploy options:

[ceph_deploy.cli][INFO ] username : None

[ceph_deploy.cli][INFO ] verbose : False

[ceph_deploy.cli][INFO ] overwrite_conf : False

[ceph_deploy.cli][INFO ] quiet : False

[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f51e0d0a878>

[ceph_deploy.cli][INFO ] cluster : ceph

[ceph_deploy.cli][INFO ] host : ['mon-node1']

[ceph_deploy.cli][INFO ] func : <function purgedata at 0x7f51e15fd578>

[ceph_deploy.cli][INFO ] ceph_conf : None

[ceph_deploy.cli][INFO ] default_release : False

[ceph_deploy.install][DEBUG ] Purging data from cluster ceph hosts mon-node1

[mon-node1][DEBUG ] connection detected need for sudo

[mon-node1][DEBUG ] connected to host: mon-node1

[mon-node1][DEBUG ] detect platform information from remote host

[mon-node1][DEBUG ] detect machine type

[mon-node1][DEBUG ] find the location of an executable

[mon-node1][DEBUG ] connection detected need for sudo

[mon-node1][DEBUG ] connected to host: mon-node1

[mon-node1][DEBUG ] detect platform information from remote host

[mon-node1][DEBUG ] detect machine type

[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.5.1804 Core

[mon-node1][INFO ] purging data on mon-node1

[mon-node1][INFO ] Running command: sudo rm -rf --one-file-system -- /var/lib/ceph

[mon-node1][INFO ] Running command: sudo rm -rf --one-file-system -- /etc/ceph/

[sysadmin@admin-node my-cluster]$

(4) Reinstall and redeploy Ceph

5) Adjust the replica count

[sysadmin@admin-node my-cluster]$ vim ceph.conf

Add the settings below: configure the public network, allow a clock drift of 2 s between monitors (the default is 0.05 s), and set the default replica count to 2.

public network = 10.100.50.0/24

mon clock drift allowed = 2

osd pool default size = 2
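For reference, the [global] section of ceph.conf would then look roughly like this. The fsid and monitor address are taken from the ceph -s output at the end of this article; the mon_initial_members and auth lines are the usual ceph-deploy defaults and are an assumption, not copied from the actual file:

[global]
fsid = 77524d79-bc18-471a-8956-f5045579cc74
mon_initial_members = mon-node1
mon_host = 10.100.50.128
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 10.100.50.0/24
mon clock drift allowed = 2
osd pool default size = 2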

5. Format the OSD disks

List the disks on the OSD node:

[sysadmin@admin-node my-cluster]$ ceph-deploy disk list osd-node1

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/sysadmin/.cephdeploy.conf

[ceph_deploy.cli][INFO ] Invoked (1.5.39): /bin/ceph-deploy disk list osd-node1

[ceph_deploy.cli][INFO ] ceph-deploy options:

[ceph_deploy.cli][INFO ] username : None

[ceph_deploy.cli][INFO ] verbose : False

[ceph_deploy.cli][INFO ] overwrite_conf : False

[ceph_deploy.cli][INFO ] subcommand : list

[ceph_deploy.cli][INFO ] quiet : False

[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x111dcb0>

[ceph_deploy.cli][INFO ] cluster : ceph

[ceph_deploy.cli][INFO ] func : <function disk at 0x1108398>

[ceph_deploy.cli][INFO ] ceph_conf : None

[ceph_deploy.cli][INFO ] default_release : False

[ceph_deploy.cli][INFO ] disk : [('osd-node1', None, None)]

[osd-node1][DEBUG ] connection detected need for sudo

[osd-node1][DEBUG ] connected to host: osd-node1

[osd-node1][DEBUG ] detect platform information from remote host

[osd-node1][DEBUG ] detect machine type

[osd-node1][DEBUG ] find the location of an executable

[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.4.1708 Core

[ceph_deploy.osd][DEBUG ] Listing disks on osd-node1...

[osd-node1][DEBUG ] find the location of an executable

[osd-node1][INFO ] Running command: sudo /usr/sbin/ceph-disk list

[osd-node1][DEBUG ] /dev/dm-0 other, ext4, mounted on /

[osd-node1][DEBUG ] /dev/sda :

[osd-node1][DEBUG ] /dev/sda3 other, LVM2_member

[osd-node1][DEBUG ] /dev/sda2 swap, swap

[osd-node1][DEBUG ] /dev/sda1 other, ext4, mounted on /boot

[osd-node1][DEBUG ] /dev/sdb other, unknown

[osd-node1][DEBUG ] /dev/sdc other, unknown

[osd-node1][DEBUG ] /dev/sdd other, unknown

[osd-node1][DEBUG ] /dev/sr0 other, unknown

[sysadmin@admin-node my-cluster]$

Use sdb on each OSD node as the journal disk and sdc as the data disk, then partition and format each of them. Use the XFS filesystem and a GPT partition table, as recommended by Ceph.

parted -a optimal --script /dev/sdc -- mktable gpt

parted -a optimal --script /dev/sdc -- mkpart primary xfs 0% 100%

mkfs.xfs /dev/sdc1
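The journal disk /dev/sdb is presumably prepared the same way (the disk listing in the next step shows an XFS /dev/sdb1 on both OSD nodes), so the equivalent commands would be:

parted -a optimal --script /dev/sdb -- mktable gpt
parted -a optimal --script /dev/sdb -- mkpart primary xfs 0% 100%
mkfs.xfs /dev/sdb1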

6. Prepare the OSD disks

List the disks:

[sysadmin@admin-node my-cluster]$ ceph-deploy disk list osd-node1

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/sysadmin/.cephdeploy.conf

[ceph_deploy.cli][INFO ] Invoked (1.5.39): /bin/ceph-deploy disk list osd-node1

[ceph_deploy.cli][INFO ] ceph-deploy options:

[ceph_deploy.cli][INFO ] username : None

[ceph_deploy.cli][INFO ] verbose : False

[ceph_deploy.cli][INFO ] overwrite_conf : False

[ceph_deploy.cli][INFO ] subcommand : list

[ceph_deploy.cli][INFO ] quiet : False

[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1580cb0>

[ceph_deploy.cli][INFO ] cluster : ceph

[ceph_deploy.cli][INFO ] func : <function disk at 0x156b398>

[ceph_deploy.cli][INFO ] ceph_conf : None

[ceph_deploy.cli][INFO ] default_release : False

[ceph_deploy.cli][INFO ] disk : [('osd-node1', None, None)]

[osd-node1][DEBUG ] connection detected need for sudo

[osd-node1][DEBUG ] connected to host: osd-node1

[osd-node1][DEBUG ] detect platform information from remote host

[osd-node1][DEBUG ] detect machine type

[osd-node1][DEBUG ] find the location of an executable

[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.4.1708 Core

[ceph_deploy.osd][DEBUG ] Listing disks on osd-node1...

[osd-node1][DEBUG ] find the location of an executable

[osd-node1][INFO ] Running command: sudo /usr/sbin/ceph-disk list

[osd-node1][DEBUG ] /dev/dm-0 other, ext4, mounted on /

[osd-node1][DEBUG ] /dev/sda :

[osd-node1][DEBUG ] /dev/sda3 other, LVM2_member

[osd-node1][DEBUG ] /dev/sda2 swap, swap

[osd-node1][DEBUG ] /dev/sda1 other, ext4, mounted on /boot

[osd-node1][DEBUG ] /dev/sdb :

[osd-node1][DEBUG ] /dev/sdb1 other, xfs

[osd-node1][DEBUG ] /dev/sdc :

[osd-node1][DEBUG ] /dev/sdc1 other, xfs

[osd-node1][DEBUG ] /dev/sdd other, unknown

[osd-node1][DEBUG ] /dev/sr0 other, unknown

[sysadmin@admin-node my-cluster]$

[sysadmin@admin-node my-cluster]$ ceph-deploy disk list osd-node2

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/sysadmin/.cephdeploy.conf

[ceph_deploy.cli][INFO ] Invoked (1.5.39): /bin/ceph-deploy disk list osd-node2

[ceph_deploy.cli][INFO ] ceph-deploy options:

[ceph_deploy.cli][INFO ] username : None

[ceph_deploy.cli][INFO ] verbose : False

[ceph_deploy.cli][INFO ] overwrite_conf : False

[ceph_deploy.cli][INFO ] subcommand : list

[ceph_deploy.cli][INFO ] quiet : False

[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1122cb0>

[ceph_deploy.cli][INFO ] cluster : ceph

[ceph_deploy.cli][INFO ] func : <function disk at 0x110d398>

[ceph_deploy.cli][INFO ] ceph_conf : None

[ceph_deploy.cli][INFO ] default_release : False

[ceph_deploy.cli][INFO ] disk : [('osd-node2', None, None)]

[osd-node2][DEBUG ] connection detected need for sudo

[osd-node2][DEBUG ] connected to host: osd-node2

[osd-node2][DEBUG ] detect platform information from remote host

[osd-node2][DEBUG ] detect machine type

[osd-node2][DEBUG ] find the location of an executable

[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.4.1708 Core

[ceph_deploy.osd][DEBUG ] Listing disks on osd-node2...

[osd-node2][DEBUG ] find the location of an executable

[osd-node2][INFO ] Running command: sudo /usr/sbin/ceph-disk list

[osd-node2][DEBUG ] /dev/dm-0 other, ext4, mounted on /

[osd-node2][DEBUG ] /dev/sda :

[osd-node2][DEBUG ] /dev/sda3 other, LVM2_member

[osd-node2][DEBUG ] /dev/sda2 swap, swap

[osd-node2][DEBUG ] /dev/sda1 other, ext4, mounted on /boot

[osd-node2][DEBUG ] /dev/sdb :

[osd-node2][DEBUG ] /dev/sdb1 other, xfs

[osd-node2][DEBUG ] /dev/sdc :

[osd-node2][DEBUG ] /dev/sdc1 other, xfs

[osd-node2][DEBUG ] /dev/sdd other, unknown

[osd-node2][DEBUG ] /dev/sr0 other, unknown

[sysadmin@admin-node my-cluster]$

The following commands must be run from /etc/ceph on the admin node (note the argument format: node:OSD data disk:journal disk):

ceph-deploy osd prepare osd-node1:sdc1:sdb1

ceph-deploy osd prepare osd-node2:sdc1:sdb1

If the error below appears, the configuration file is out of sync between nodes and has to be pushed manually: ceph-deploy --overwrite-conf config push admin-node osd-node1 osd-node2

[ceph_deploy.osd][ERROR ] RuntimeError: config file /etc/ceph/ceph.conf exists with different content; use --overwrite-conf to overwrite

[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

7. Activate the OSD disks

ceph-deploy osd activate osd-node1:sdc1:sdb1

ceph-deploy osd activate osd-node2:sdc1:sdb1

If you hit the error below complaining about permissions, change the owner and group of the disk partitions to ceph (not the account you created for deployment, but the 'ceph' user that the Ceph packages create):

[osd-node1][WARNIN] ceph_disk.main.Error: Error: ['ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', u'0', '--monmap', '/var/lib/ceph/tmp/mnt.wMliCA/activate.monmap', '--osd-data', '/var/lib/ceph/tmp/mnt.wMliCA', '--osd-journal', '/var/lib/ceph/tmp/mnt.wMliCA/journal', '--osd-uuid', u'14a500fd-a030-427a-b007-f16f6f4bbd4d', '--keyring', '/var/lib/ceph/tmp/mnt.wMliCA/keyring', '--setuser', 'ceph', '--setgroup', 'ceph'] failed : 2018-05-23 10:47:41.254049 7fece10c8800 -1 filestore(/var/lib/ceph/tmp/mnt.wMliCA) mkjournal error creating journal on /var/lib/ceph/tmp/mnt.wMliCA/journal: (13) Permission denied

[osd-node1][WARNIN] 2018-05-23 10:47:41.254068 7fece10c8800 -1 OSD::mkfs: ObjectStore::mkfs failed with error -13

[osd-node1][WARNIN] 2018-05-23 10:47:41.254123 7fece10c8800 -1 ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.wMliCA: (13) Permission denied

[osd-node1][WARNIN]

[osd-node1][ERROR ] RuntimeError: command returned non-zero exit status: 1

[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdc1

Change the owner and group of the disk partitions to ceph:

[root@osd-node1 yum.repos.d]# cat /etc/passwd

root:x:0:0:root:/root:/bin/bash

bin:x:1:1:bin:/bin:/sbin/nologin

......

sysadmin:x:1000:1000:sysadmin:/home/sysadmin:/bin/bash

ceph:x:167:167:Ceph daemons:/var/lib/ceph:/sbin/nologin

[root@osd-node1 yum.repos.d]# chown ceph:ceph /dev/sdb1

[root@osd-node1 yum.repos.d]# chown ceph:ceph /dev/sdc1

[root@osd-node1 yum.repos.d]# ll /dev/sdb1

brw-rw---- 1 ceph ceph 8, 17 May 23 10:29 /dev/sdb1

[root@osd-node1 yum.repos.d]# ll /dev/sdc1

brw-rw---- 1 ceph ceph 8, 33 May 23 10:33 /dev/sdc1

[root@osd-node1 yum.repos.d]#

8. Prepare and activate the disks in a single step

ceph-deploy osd create osd-node1:sdc1:sdb1

ceph-deploy osd create osd-node2:sdc1:sdb1

9. View the OSD tree

[sysadmin@admin-node ceph]$ ceph osd tree

ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY

-1 0.03897 root default

-2 0.01949 host osd-node1

0 0.01949 osd.0 up 1.00000 1.00000

-3 0.01949 host osd-node2

1 0.01949 osd.1 up 1.00000 1.00000

[sysadmin@admin-node ceph]$

If you get the error below: all of my keyrings live under /home/sysadmin/my-cluster/, so simply copy those files into /etc/ceph/ and change their permissions to 755.

[sysadmin@admin-node my-cluster]$ ceph osd tree

2018-05-23 11:08:38.913940 7fbd0e84d700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory

2018-05-23 11:08:38.913953 7fbd0e84d700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication

2018-05-23 11:08:38.913955 7fbd0e84d700 0 librados: client.admin initialization error (2) No such file or directory

Error connecting to cluster: ObjectNotFound

[sysadmin@admin-node my-cluster]$

[root@admin-node ceph]# cp /home/sysadmin/my-cluster/* /etc/ceph/

cp: overwrite ‘/etc/ceph/ceph.bootstrap-mds.keyring’? y

cp: overwrite ‘/etc/ceph/ceph.conf’? y

[root@admin-node ceph]# chmod 755 *

[root@admin-node ceph]# ll

total 172

-rwxr-xr-x 1 root root 113 May 23 11:08 ceph.bootstrap-mds.keyring

-rwxr-xr-x 1 root root 71 May 23 11:08 ceph.bootstrap-mgr.keyring

-rwxr-xr-x 1 root root 113 May 23 11:08 ceph.bootstrap-osd.keyring

-rwxr-xr-x 1 root root 113 May 23 11:08 ceph.bootstrap-rgw.keyring

-rwxr-xr-x 1 root root 129 May 23 11:08 ceph.client.admin.keyring

-rwxr-xr-x 1 root root 225 May 23 11:08 ceph.conf

-rwxr-xr-x 1 root root 142307 May 23 11:08 ceph-deploy-ceph.log

-rwxr-xr-x 1 root root 73 May 23 11:08 ceph.mon.keyring

-rwxr-xr-x. 1 root root 92 Oct 4 2017 rbdmap

-rwxr-xr-x 1 root root 0 May 23 10:33 tmpAlNIWB

[root@admin-node ceph]#
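An alternative to copying the files by hand is ceph-deploy's admin subcommand, which pushes ceph.conf and the admin keyring to the listed nodes (a sketch; run it from the my-cluster directory, then make the keyring readable):

ceph-deploy admin admin-node mon-node1 osd-node1 osd-node2
sudo chmod +r /etc/ceph/ceph.client.admin.keyring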

10. Check the cluster status

[sysadmin@admin-node my-cluster]$ ceph -s

cluster 77524d79-bc18-471a-8956-f5045579cc74

health HEALTH_OK

monmap e1: 1 mons at {mon-node1=10.100.50.128:6789/0}

election epoch 3, quorum 0 mon-node1

osdmap e11: 2 osds: 2 up, 2 in

flags sortbitwise,require_jewel_osds

pgmap v19: 64 pgs, 1 pools, 0 bytes data, 0 objects

68400 kB used, 40869 MB / 40936 MB avail

64 active+clean

[sysadmin@admin-node my-cluster]$ ceph osd tree

ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY

-1 0.03897 root default

-2 0.01949 host osd-node1

0 0.01949 osd.0 up 1.00000 1.00000

-3 0.01949 host osd-node2

1 0.01949 osd.1 up 1.00000 1.00000

[sysadmin@admin-node my-cluster]$
