192.168.1.220  node1 (mon, ceph-deploy)
192.168.1.221  node2 (osd)
192.168.1.222  node3 (osd)

On every node, edit /etc/selinux/config, set SELINUX=disabled, and reboot.
The user created here is cent, with password cent.
sudo useradd -d /home/cent -m cent
sudo passwd cent
echo "cent ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cent
sudo chmod 0440 /etc/sudoers.d/cent
su cent    (switch to the cent user; important: if you logged in as a different user, do not run ceph-deploy via sudo or as root, because ceph-deploy cannot issue sudo commands on the remote hosts)
sudo visudo    (change "Defaults requiretty" to "Defaults:cent !requiretty")
sudo hostname node1    (the other two nodes become node2 and node3)
sudo yum install ntp ntpdate ntp-doc
sudo yum install openssh-server

Edit the hosts file on node1 and add the node entries:
vim /etc/hosts

192.168.1.220 node1
192.168.1.221 node2
192.168.1.222 node3

Create a ceph.repo file under /etc/yum.repos.d/ with the following content:
[Ceph]
name=Ceph packages for $basearch
baseurl=
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=
priority=1

[ceph-source]
name=Ceph source packages
baseurl=
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=
priority=1

The above uses the 163 mirror; the Aliyun mirror works as well:
[ceph]
name=ceph
baseurl=
gpgcheck=0

[ceph-noarch]
name=cephnoarch
baseurl=
gpgcheck=0

sudo yum install yum-plugin-priorities
sudo yum install ceph-deploy
ssh-keygen    (press Enter at every prompt to accept the defaults)
ssh-copy-id cent@node1
ssh-copy-id cent@node2
ssh-copy-id cent@node3

vim ~/.ssh/config    (create the config file and add the following)
Host node1
    Hostname node1
    User cent
Host node2
    Hostname node2
    User cent
Host node3
    Hostname node3
    User cent

chmod 600 ~/.ssh/config    (restrict the config file's permissions; it is owned by cent, so sudo is not needed)
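Since the three stanzas are identical apart from the host name, the file is easy to generate. A small sketch (the gen_ssh_config helper is hypothetical, not part of the original steps):

```shell
# Hypothetical helper: print one ~/.ssh/config stanza per node name,
# matching the layout above (Hostname mirrors the host, User is cent).
gen_ssh_config() {
  for n in "$@"; do
    printf 'Host %s\n    Hostname %s\n    User cent\n' "$n" "$n"
  done
}
gen_ssh_config node1 node2 node3
```

Redirect the output into ~/.ssh/config and chmod it to 600 as above.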
Note: the cluster built here (1 mon node, 2 osd nodes) has no dedicated admin node; the mon node doubles as the admin node.
mkdir my-cluster
cd my-cluster
ceph-deploy new node1    (on success this generates ceph.conf)
vim ceph.conf    (append to the end of the [global] section)
osd pool default size = 2

Run on node1: ceph-deploy install node1 node2 node3
After the installation completes, run on node1: ceph-deploy mon create-initial

ssh node2
sudo mkdir /var/local/osd0
sudo chmod -R 777 /var/local/osd0/
exit

ssh node3
sudo mkdir /var/local/osd1
sudo chmod -R 777 /var/local/osd1/
exit

ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1
ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
ceph-deploy admin node1 node2 node3
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
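Once the keyring is readable, ceph -s works from any node. If a script needs to act on the result, the health token can be pulled out of the captured status text; a minimal sketch (the sample string is a shortened copy of the jewel-era status output shown in this post — in a real script you would use status=$(ceph -s)):

```shell
# Extract the health token from captured `ceph -s` text.
# Sample text copied from the status output in this post.
status='cluster f2891898-aa3b-4bce-8bf1-668b8cf5b45a
 health HEALTH_OK
 monmap e1: 1 mons at {node1=192.168.1.220:6789/0}'
health=$(printf '%s\n' "$status" | awk '$1 == "health" {print $2; exit}')
echo "$health"
```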
[cent@node1 my-cluster]$ ceph -s
    cluster f2891898-aa3b-4bce-8bf1-668b8cf5b45a
     health HEALTH_OK
     monmap e1: 1 mons at {node1=192.168.1.220:6789/0}
            election epoch 3, quorum 0 node1
     osdmap e10: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v225: 64 pgs, 1 pools, 0 bytes data, 0 objects
            16205 MB used, 40039 MB / 56244 MB avail
                  64 active+clean

A later attempt to upgrade ceph-deploy through yum failed on a missing dependency:

Loaded plugins: langpacks, priorities, product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
53 packages excluded due to repository priority protections
Resolving Dependencies
--> Running transaction check
---> Package ceph-deploy.noarch 0:1.5.37-0 will be updated
---> Package ceph-deploy.noarch 0:1.5.38-0 will be an update
--> Processing Dependency: python-distribute for package: ceph-deploy-1.5.38-0.noarch
--> Running transaction check
---> Package python-setuptools.noarch 0:0.9.8-4.el7 will be installed
Removing python-setuptools.noarch 0:0.9.8-4.el7 - u due to obsoletes from installed python2-setuptools-22.0.5-1.el7.noarch
--> Restarting Dependency Resolution with new changes.
--> Running transaction check
---> Package python-setuptools.noarch 0:0.9.8-4.el7 will be installed
--> Processing Dependency: python-distribute for package: ceph-deploy-1.5.38-0.noarch
--> Running transaction check
---> Package python-setuptools.noarch 0:0.9.8-3.el7 will be installed
Removing python-setuptools.noarch 0:0.9.8-3.el7 - u due to obsoletes from installed python2-setuptools-22.0.5-1.el7.noarch
--> Restarting Dependency Resolution with new changes.
--> Running transaction check
---> Package python-setuptools.noarch 0:0.9.8-3.el7 will be installed
--> Processing Dependency: python-distribute for package: ceph-deploy-1.5.38-0.noarch
--> Finished Dependency Resolution
Error: Package: ceph-deploy-1.5.38-0.noarch (ceph-noarch)
       Requires: python-distribute
You could try using --skip-broken to work around the problem
Found 1 pre-existing rpmdb problem(s), 'yum check' output follows:
ceph-deploy-1.5.37-0.noarch has missing requires of python-distribute

Fix: fetch the 1.5.39 package and install it while ignoring the broken dependency:

wget
rpm -Uvh ceph-deploy-1.5.39-0.noarch.rpm --nodeps

ceph osd pool create cephpool_01 16 16    # create a pool with 16 pgs and 16 pgps; if a pool already exists from the ceph setup (e.g. data, metadata, rbd), you can use it instead of creating a new one
pool 'cephpool_01' created
ceph osd pool set cephpool_01 size 2    # the replica count; since we have only two osds, set it to 2
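The 16 PGs used here are fine for a two-OSD test cluster. A common rule of thumb (an addition, not from the original post) is total PGs ≈ osds × 100 / replicas, rounded up to the next power of two; as a sketch:

```shell
# Rule-of-thumb PG count: (osds * 100 / replicas), rounded up to a power of two.
pg_count() {
  osds=$1; replicas=$2
  target=$(( osds * 100 / replicas ))
  pg=1
  while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
  echo "$pg"
}
pg_count 2 2    # 2 osds, 2 replicas -> prints 128
```

On a toy cluster like this one, the small value 16 keeps per-OSD overhead down; the rule of thumb matters on production-sized clusters.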
Note: to create an object object2 in the pool mypool directly, without uploading a file, the command is:
rados create object2 -p mypool

[cent@node2 1.5_head]$ pwd
/var/local/osd0/current/1.5_head
[cent@node2 1.5_head]$ ll
total 8
-rw-r--r-- 1 ceph ceph  0 Nov 27 20:26 __head_00000005__1
-rw-r--r-- 1 ceph ceph 42 Nov 27 21:32 object\u01__head_376EEA75__1

[cent@node3 1.5_head]$ pwd
/var/local/osd1/current/1.5_head
[cent@node3 1.5_head]$ ll
total 8
-rw-r--r-- 1 ceph ceph  0 Nov 27 20:26 __head_00000005__1
-rw-r--r-- 1 ceph ceph 42 Nov 27 21:32 object\u01__head_376EEA75__1
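The directory name 1.5_head is not arbitrary: it names PG 5 of pool 1. When pg_num is a power of two, Ceph's object-to-PG mapping reduces to hash(object) mod pg_num within the pool; the listing shows the object's hash 376EEA75, and the pool was created with 16 PGs. A quick check under that simplified assumption:

```shell
# Reproduce the PG directory name from the object hash shown in the listing.
# Assumes the simplified mapping hash mod pg_num, valid for power-of-two pg_num.
hash=0x376EEA75
pg_num=16
printf '1.%x_head\n' $(( hash % pg_num ))
```

This prints 1.5_head, matching the directory on both node2 and node3 (both replicas hold the PG, which is why the same file shows up on each osd).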
$ ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.05359 root default
-2 0.02679     host node2
 0 0.02679         osd.0        up  1.00000          1.00000
-3 0.02679     host node3
 1 0.02679         osd.1        up  1.00000          1.00000

The counterpart of put is get: fetch an object just uploaded back to the local machine under a new name, as in rados get <object> <outfile> -p <pool>, for example:

$ rados get object1 /home/liangwl/getfile -p cephpool

[cent@node1 local]$ rados get object_01 /tmp/file -p cephpool_01
[cent@node1 local]$ cd /tmp/
[cent@node1 tmp]$ ls
file  systemd-private-27b6c2f48ffc423fa461609e5e62a630-ceph-mon@node1.service-nssea4
[cent@node1 tmp]$ ll
total 4
-rw-r--r-- 1 cent cent 42 Nov 27 22:22 file
drwx------ 3 root root 16 Nov 27 19:27 systemd-private-27b6c2f48ffc423fa461609e5e62a630-ceph-mon@node1.service-nssea4
[cent@node1 tmp]$ vim file
I am a student and from chd university!!!
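One way to confirm the put/get round trip without opening the file is to compare checksums of the original and the fetched copy. A sketch with the rados calls commented out (they need the live cluster; a local copy stands in for them here):

```shell
# Verify a round trip by checksum. The rados lines are the real commands;
# the cp stands in for them so the sketch runs without a cluster.
printf 'I am a student and from chd university!!!\n' > /tmp/rt_orig
# rados put object_01 /tmp/rt_orig -p cephpool_01
# rados get object_01 /tmp/rt_fetched -p cephpool_01
cp /tmp/rt_orig /tmp/rt_fetched
sum1=$(md5sum < /tmp/rt_orig)
sum2=$(md5sum < /tmp/rt_fetched)
[ "$sum1" = "$sum2" ] && echo "round trip OK"
```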
Reposted from: http://wrvmb.baihongyu.com/